
Why did you decide to go into this bar and not that bar?

On this question may depend the future of AI, and perhaps humanity. By posing it, AI researchers shift artificial intelligence from the source of convenient robot helpers to the thing that unlocks humans’ limitless potential.

Or so we thought after our conversation with lead researcher Jeff Clune of the Evolving AI Lab. We sought out an unorthodox AI pioneer, and Clune did not disappoint. He’s at the leading edge of his field if you go by such things as his output of published papers. We don’t go by that, though. Clune stands out because he is trying to teach robots not so much consciousness as sub-consciousness; and he wants to teach that sub-consciousness to evolve, just like ours did.

Of course, the choice of which bar to go into, at least for most people, typically emerges from the subconscious. Clune points out that it’s hard to say afterwards why we chose one over the other. Why is this so important for AI?

Well, if Clune has his way, the evolution of the robot mind will eventually produce a robot subconscious: they will interact socially, away from us; they will desire to play; they will be curious, and generate art, and solve complex problems the way we do. Not so much through rational heavy-lifting, but through that spark of insight, the one they have in the robot-shower, that tells them the answer lies in one direction and not the other. Robots will have serendipity and produce novelty.

For humans, novelty is way more important than efficiency. In the future, our milk carton will order more milk when it’s empty, and have it delivered. That most certainly will not change humanity. What will is robots that ask different questions than we do; robots that surprise us with their creativity and spark; robots that help us see the world in a completely different way. Those are the robots Clune is laying the foundation for. We were fascinated to hear how an AI researcher may influence the course of our future. We think you will be too.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm. 

We are all, to varying degrees, prisoners of our metaphors. This is true both in our own heads and when we communicate with others. And it’s definitely true in business. Depending on your industry, any number of metaphors can guide the acquisition of new capabilities. However, there is one global metaphor shift that will help you and your organization succeed in the Network Age, and that is the shift from a mechanical to an organic worldview – from a view of your business as a machine to a view of it as a living ecosystem.

In the last few decades, the predominant metaphor has shifted in nearly every one of the sciences from mechanistic to organic. Some scientists have even posited that the universe itself may essentially be a living, complex organism. So it’s high time business updated its metaphor as well.

Machine Management

The machine metaphor is ubiquitous. From the dawn of the Industrial Revolution, we have used metaphors about machines to communicate about work. When things are running smoothly we say they are “humming along” or “well oiled”. Likewise, if we encounter a problem that needs to be fixed, we simply “re-engineer” the machine.

Frederick Taylor, the pioneer of industrial age management, used his stopwatch to measure the motions of people as parts in a machine. This was the machinery of the Industrial Age, and make no bones about it, this reductionist approach yielded unprecedented gains in productivity and material wealth.

It seemed rational to treat people like parts in a machine when the way that you unleashed productivity was by putting people into factories. Those factories required huge outlays in capital. It therefore made sense to make decisions around the binding constraint on growth – capital itself. Measures like return on equity and the DuPont formula arose to facilitate rationing of capital and ensure its application to highest return opportunities.

Is ROE Capitalism’s “Runaway”?

The cornerstone of the Industrial Age business measurement system was ROE. Several years ago the Harvard Business Review published an article entitled Runaway Capitalism. A “runaway” in evolutionary terms is when natural selection and sexual selection become decoupled, as in the case of the peacock’s tail. The tail offers such an evolutionary disadvantage that peacocks would be extinct were it not for the fact that humans like to collect them. Yet peahens are drawn to the tail, so it continues to be selected for by the species even though the natural environment does not support it. The author’s point was that ROE may be capitalism’s runaway. And the implication is that it may threaten the continued survival of the planet and the species.

The Challenges of a Machine Metaphor

If the above is true, then perhaps the biggest challenge presented by machine management is the belief that everything should myopically serve the machine. And that one number, ROE, represents “the One Ring to rule them all” in the machine metaphor.

Other challenges with the machine metaphor include the fact that it stifles initiative. Henry Ford captured the essence of the machine mentality when he asked, “Why is it that every time I ask for a pair of hands, they come with a brain attached?” A business that is run solely as a machine is not adaptive. It is too cumbersome and slow and fails to consider externalities. It isn’t designed for emergence. Or as Carolyn Hendrickson, a Ph.D. in organization design, quipped, “Why don’t matrix organizations work? Because the mind that designs the matrix is not the mind that inhabits it.”

Another challenge of the machine metaphor is that we tend to apply it to ourselves, even as network scientists increasingly show that we are ourselves products of the networks we engage in.

Despite these problems with the machine view, there are new perspectives that can include it in a new, larger whole – that of the complex ecosystem.

The Power of a New Metaphor: The Complex Ecosystem

Modern organizations are composed of complex living systems or networks. The metaphor of an ecosystem implies we are part of a community of living organisms, intertwined with nonliving components like technology, all interacting as a system.

This boundless system goes far beyond the physical boundaries of the firm and includes psychosocial characteristics as well as the material elements of our supply chain. It also includes the emergent outcomes of local agents acting upon very simple rules (which is why a rigid three-year plan won’t work). In order to compete and stay relevant, we need to stop managing what we think of as static machines and start nurturing our boundless dynamic ecosystems. Some principles for a networked ecosystem design could include the following:

1. Think like a Gardener: assemble, shape, influence, enable and nurture.

Recognize that some problems, like the “diabesity” crisis, cannot be directly solved. For certain classes of problems, what scientists call “wicked problems”, we must focus on building capabilities for solutions rather than on the solutions themselves.

2. Start with Purpose. Purpose has been proven to be the factor that enables firms to outperform their contemporaries over extended periods of time. Purpose is the first step in our “strength from the inside out” methodology of ecosystem design and orchestration. Purpose gives a network energy, and it is a natural north star. The HP Way offers a classic example of purpose as an ecosystem design principle. It is as much a set of values that provides the basis for how people in the ecosystem will treat each other as it is an explicit statement of what the ecosystem is designed to accomplish. In complex networks, it is simple rules acted on at the local level that create the network experience. In a business ecosystem, these values act as simple rules that guide the behaviors of decision-makers locally and empower change at the network level. Strategy in the network age is about communication flows and the incentives and relationships driving them. In times of change and uncertainty, values and purpose can provide the DNA for a network structure by enabling communication flows and aligning intellectual, human and social capital.

3. Design for the Whole. Tim Cook tells the story of coming to Apple for the opportunity to work at the seams between hardware, software and communications because, in his words, “that’s where the magic lies”. Leaders who are network designers and ecosystem enablers know that the value you capture cannot exceed the value your ecosystem creates. In this sense, Michael Porter was wrong: there aren’t five forces, only one. In nature, waste equals food. Likewise in business, in the strongest ecosystems the parts feed each other. Disney’s ecosystem is a great example of a synergistic or integrated business ecosystem in which one innovation feeds the others. When a new Pirates of the Caribbean comes out, for example, the studios make money, the theme park gets a new ride, the franchises sell toys, the brand sells licensing agreements, and each of these entities benefits because the network is designed so that novelty within one part brings activity to the others. In working with clients across industries, we have found there is an opportunity to gain an outsized share of voice, and corresponding share gains, by helping to solve larger, pressing ecosystem problems even if you are not an ecosystem creator like Apple or Disney. On a more modest scale, taking the bigger view allows one to mobilize resources from complementary providers and other stakeholders.

4. Consider the Parts when Designing the Whole. It is widely known that businesses increasingly win on experience. Increasingly, companies compete by creating platforms upon which customers co-create, and it is therefore nearly impossible to separate the network experience from the user experience. It is also the case that we are increasingly a cyborg network now. The best chess teams in the world are currently centaurs: half human, half machine. How are you using technology to augment your team’s abilities? And do you understand all the facets of your network experience?

5. Make your People Network Designers. Empower everyone at your company to understand and shape network behaviors—in short, to design for emergence, because as Churchill said, “We shape our buildings; thereafter they shape us.” The same is true of our social structures.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm. 


The holidays are here so it’s time for us to jump on the bandwagon with a friendly PSA: If you go and buy that giant inflatable snowman to keep up with the Joneses, you’re playing into positive feedback loops, and before you do that you should know that nature abhors stagnation. I don’t mean to be a shit, but positive feedback loops are destructive like sneaky, malicious hurricanes and that’s bad news for you unless you’ve got something you want destroyed.


Now, there’s a time and place for creative destruction and weirdly enough network science can help you find it. Hang on a minute and I’ll illustrate.

When I was fifteen my English lit teacher gave me this really old copy of Main Street. It wasn’t the best book I ever read but it really dug into the anatomy of boredom. Through his tale of Carol Milford, a cosmopolitan young woman who moves to a small town with her new husband, Sinclair Lewis meticulously unpacked the pathology of sameness as a slow, painful killer of culture and community. The story went boldly into the beige, laying bare the domestic misery that it seemed to my teenage self everyone was ignoring. I was repulsed and impressed. I remember climbing onto the roof of my suburban house gasping for fresh air, wondering, “How small can people really make their worlds?”

Pretty small, it turns out.

Fourteen years later (a Saturn cycle! the astrologer says, as if everyone should purse their lips and nod their heads), and I’m sitting in couples therapy with my (now ex) husband. What’s that hanging over the counselor’s desk? Why, a familiar illustration from a limited edition first printing of Main Street.

So now I’m paying attention.

On the other side of divorce I became fascinated by the psychology of boredom. Whose fault is the unhappiness that results? Should you blame circumstances or yourself? The small town, or your own resistance to conformity (a cultural positive feedback loop)? Is there a link between boredom and intelligence?  On the other side of the same coin is curiosity, which does have a link to intelligence. Studies show that a tendency to report frequent feelings of boredom, a trait scarily prevalent among people with narcissistic personality disorder, may be a function of the quality of one’s self-awareness. Boredom tendencies run higher in individuals with lower absorption (a measure of attention span–no surprise there) and in individuals with negative self-awareness tending toward evaluation and judgment. No wonder narcissists, who constantly seek external means of self-validation, are notoriously whiny about their listlessness. Boredom is something we all experience at one time or another, and it may have an important evolutionary function: inciting experiences of pattern interruption. But before it does that it can make you stupid and dull. Temporarily.

In his article “The Surprising Power of an Uncomfortable Brain,” Garth Sundem, author of Beyond IQ and Your Daily Brain, illustrates with friendly snark that “a brain shocked from its easy complacency functions better than a brain kicking along on autopilot,” whereas the repetition of familiar situations can lull your brain “zombie-like into the halls of mindless consumption.” In the same piece, Sundem cites several cognitive research experiments showing that people whose brains encountered situations where expectations and reality were mismatched performed better on cognitive tests because their brains switched from associative to rule-based systematic processing. In plain English: encountering the unexpected wakes up your brain. Anything that induces “cultural dysfluency” should do the trick. Including culture shock. So while the culture shock of moving to a new town may be temporarily invigorating, the newness eventually wears off and the sameness can be stifling, prompting a person to seek new forms of pattern interruption.

So let’s assume you’re Carol Milford of Main Street and you want to look at boredom as a function of your network:

Depending on your own biases you may think people are chaotic or predictable, but they’re not either of these things all the time. What they are is complex, meaning they’re affected by all the feedback loops that run between them and their environment. Boredom is the product of a feedback loop between your brain, your environment, and your perceptual narrative.

You should know that complex networks (personalities, relationships, markets, and even Main Street) are characterized by feedback and have three tell-tale behaviors. If Carol Milford had understood network behavior, she might’ve taken more responsibility for her own happiness from the get-go, moved somewhere more interesting and saved herself the effort of trying to transform the culture of the town. If you know these tendencies you can save yourself a lot of trouble, and if you bear with me I’ll tell you how.

Attractors – these are places where the network is moving toward some kind of equilibrium. The beginnings of order. (In our Main Street scenario, something happens and people are drawn to a certain type of behavior).

Self-reinforcement – where order begets more order. If the nodes in a network are the interconnected lives of Main Street, this is where they all keep doing the same thing because “that’s the way it’s done” and there must be some reward for doing things that way. The positive feedback loop continually validates and perpetuates itself in ways that are pretty much invisible unless you’re on the outside looking for them.

Cascades – these are shifts in direction caused by an outside intervention or an internal breakdown, as when a positive feedback loop has become so homogeneous as to be unsustainable and fragile to outside disruption. A cascade rips through it once and the network is never the same again.
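If you’d rather poke at these dynamics than take my word for them, here’s a minimal sketch in Python. Everything in it is invented for illustration (the agents, the behaviors, the parameters): agents repeatedly copy one another, self-reinforcement drains the diversity out of the network, and the homogeneity that’s left is exactly what sets the stage for a cascade.

```python
import random
from collections import Counter

# Toy model of the behaviors described above, with made-up parameters:
# agents on "Main Street" repeatedly copy a randomly chosen neighbor.
# Conformity (self-reinforcement) pulls the network toward a single
# attractor; the resulting homogeneity is what leaves it fragile to a cascade.
random.seed(42)
N_AGENTS, N_STEPS = 100, 20000
behaviors = [random.choice("ABCDE") for _ in range(N_AGENTS)]  # initial diversity

for step in range(N_STEPS):
    imitator, model = random.sample(range(N_AGENTS), 2)
    behaviors[imitator] = behaviors[model]          # "that's the way it's done"
    if step % 4000 == 0:
        top_share = Counter(behaviors).most_common(1)[0][1] / N_AGENTS
        print(f"step {step:5d}  distinct behaviors: {len(set(behaviors))}  "
              f"top behavior share: {top_share:.0%}")

print("final mix:", dict(Counter(behaviors)))
# Once one behavior dominates, a single disruption that agents start copying
# can sweep the whole network the same way -- that's the cascade.
```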

Good? Bad? Neither inherently, because we aren’t talking about an abstraction–we’re talking about the fundamental structure and behavior of complex systems, and positive feedback loops always undo themselves. They either accept diversity and pivot toward greater resilience or they cascade and become something else.

Take heart though. Boredom and disruption go hand in hand, like everything else with its opposite (ever had a week of artistic frustration only to have a colossal breakthrough on the other side?). Periods of boredom and listlessness in human beings often spur discovery. In business, innovation clusters explode when a company breaks the lack of competition (a positive feedback loop) by doing something different that the network was ready for: disruption. Eventually you have the Big Idea, or someone else has it for you, because The Next Big Idea is always riding the cresting wave of the network.

The way the world works is fundamentally about linkages. Taiji master Ben Lo said that whenever you embody yin you also embody yang. A system always embodies the whole circle, and here is where the power is, because it allows for movement into one state to create disequilibrium, which incites a system to move and change in order to regain equilibrium. Nothing can be yin without yang. So if you think about it in terms of network dynamics, boredom is your signal to seek a new stimulus (internal or external) or it will seek you. One way or another, everything in a complex system shifts.

Again: A local network either invites diversity and changes, or it stays the same so long that it becomes fragile, unprepared to adapt to perturbations from its external environment. Then a germ comes along from across the pond and destroys an indigenous population, or an incumbent tech company doesn’t see the little guy rising up in time… or a marriage runs into trouble and doesn’t make it. Either way change comes and you get to choose a new direction.

If you aren’t designing for emergence you might get comfortable and mistake positive feedback loops for equilibrium–when what they really are is pent-up order. Emergence will happen anyway. Novelty always prevails over habit, else networks crumble and end up on the forest floor, where as cultural detritus they give new life to emergent forms. This is the way of life.

Acrobats know that you have to move constantly to find balance and stillness. Sometimes those movements are imperceptible, but they are what allow you to keep your footing.

No doubt Sinclair Lewis quelled the demons of his own small town boredom by creating a world where he could shine a light on its secret interiors. For Main Street’s Carol Milford, emergence did not produce a cultural renaissance in Gopher Prairie, as she’d hoped. A lot of people (myself included) got pissed off about that. But she was a network of one, and did not have the agency with which time and entropy eventually overcome all homogeneous networks and the small towns that personify them. Instead, emergence produced in the small networked world of her mind a new way of seeing, a new frame of mind–one that told her she’d be ok no matter what happened. This peculiar marriage of aloofness and intent is the sweet spot where a human being can find agency in a network.

Memes matter, but not so much as mutability. Designing for emergence, or as Alfred North Whitehead might have put it, seeking ordered forms of novelty and novel forms of order, produces the lucky buds of change that networks nurture into memes, which, once they spread, flower into disruption. What happened when readers of Main Street integrated what they saw there into their own worlds certainly changed some minds, an emergent process that continues in immeasurable ways to this day. (Otherwise people wouldn’t hang it over their desks as a symbol of personal transformation.)

Main Street isn’t real. It exists in your imagination and you can leave at any time. The self-organizing nature of the universe always pulls novelty from the battle between order and entropy. Boredom leads to discovery. So before you go and copy someone else’s strategy, sit with your boredom for a while and allow the network to enable emergence.

Network dynamics dictate that everything changes, and you get to choose whether to accept that or the inevitable cascade that comes to wash away the sameness. Either way, we promise it won’t be boring.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.


“Ideally, we want News Feed to show all the posts people want to see in the order they want to read them.” -Facebook

What Are Algorithmic Values?

From our perspective, the purpose of a recommender algorithm is simply to give us the content or products we really want to see. There is a problem right off the bat: we don’t know what content or products we really want to see. If we did, we wouldn’t need a recommender engine! In steps a type of algorithm called “collaborative filtering”. If you’ve viewed all the Judd Apatow (director of Knocked Up) movies, the algorithm could observe that other Apatow fans have also ordered Anchorman. Out pops the recommendation. What? You’ve already seen Anchorman five times on a separate site, or at a friend’s house? Then the recommendation is just noise.
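For the curious, here’s roughly what that “people who liked X also liked Y” logic looks like in code. It’s a deliberately tiny sketch with invented users and titles, not any real site’s recommender:

```python
from collections import Counter

# Hypothetical watch histories, invented for illustration.
watch_history = {
    "you":   {"Knocked Up", "Superbad", "Step Brothers"},
    "fan_1": {"Knocked Up", "Superbad", "Anchorman"},
    "fan_2": {"Knocked Up", "Step Brothers", "Anchorman"},
    "other": {"Blade Runner", "Alien"},
}

def recommend(user, history):
    """Score unseen titles by how much their fans' tastes overlap with yours."""
    seen = history[user]
    scores = Counter()
    for other_user, titles in history.items():
        if other_user == user:
            continue
        overlap = len(seen & titles)        # similarity between you and them
        for title in titles - seen:         # only suggest what you haven't seen *here*
            scores[title] += overlap
    return scores.most_common()

print(recommend("you", watch_history))
# -> [('Anchorman', 4), ...]  But if you've already watched Anchorman five times
#    somewhere else, the top recommendation is, as noted above, just noise.
```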

Here is the thing, though: it costs the site very little, on the margin, to deliver a recommendation with no value. Any increase in incremental clicks from populating your recommendations is gravy for them. This creates a fundamental imbalance: your time is more valuable to you than it is to the algorithm. Any improvement in clicks or time spent on the site benefits its designer’s bottom line, even if it doesn’t quite benefit you as much. Netflix gingerly steps around this issue: “We are not recommending [a movie] because it suits our business needs, but because it matches the information we have from you.” Hmmm… It might be more accurate to say that using the information they have from you serves their business needs.

What’s In It For Us?

Well, the alternatives, like watching the TV news, searching through IMDb for a movie, or asking a friend, also take time. Either way we have to wade through useless information. Perhaps the Facebook News Feed algorithm is a big improvement. Or maybe it isn’t, but we are on Facebook a lot anyway, so why not. We learn to adjust to recommendation noise over time, perhaps mentally filtering out irrelevant stories or obtrusive ads as we, say, read through our Facebook News Feed. This last dynamic is important. “Digital inertia” keeps us walking down the path that the sites we depend on have laid out for us. Once we swipe through online dating profiles at a mind-numbing pace, well, we just get used to it. This is “just the way things are”. In a sense, we are trapped by this new world-view. Whatever values we had before, now we have a set of new ones that benefit the algorithm provider. As our accompanying interview with Spritzr dating app CEO Manshu Argawal describes, this shift in values may not be to our individual or, especially, our collective benefit. After all, when we enter the portal that we expect will connect us to a whole world of possibilities, what we’re really hoping is that it’ll be the scaled-up equivalent of taking a walk down a friendly street we’ve never been down.

You might not think that algorithms are all that invasive. After all, the Internet is huge and full of noise (and sometimes, rife with dumb.) It’s a self-organizing map, a web of connections whose pathways are forged by whatever pilgrims made them first. Just like ants find scent trails by detecting the pheromones of the hungry ants who traveled before them, or the way neurons that fire together wire together, we leave a trail when we go from one site to the next, and that trail is recorded by an algorithm that assumes we liked our route. And so it recommends future itineraries based on what we’ve already seen before. That’s great – better than if it hadn’t paid attention at all, right? But what happens when the recommending algorithm knows you too well? Perhaps you roll over in bed one morning and open your news feed and it anticipates your interests so accurately that, to your dismay, the app that once made you laugh into your morning coffee or forget all about your boring train ride no longer has anything interesting to say?

There’s almost nowhere we go that we don’t take our mobile device with us. There are no more closed doors. It’s seen our embarrassing searches and medical questions, it knows all the dumb vines we liked. We can’t go back to first dates and first impressions. It thinks it knows who we are. Will we fall out of love?

Mystery, discovery, surprise. These are on the mind of Jarno Koponen, a network science enthusiast and developer of Random, the App. It’s guys like this you might expect to design something like the frighteningly capable and caring AI companion in the movie Her. Koponen seems to understand that the Internet, as a complex network, is in a sense a wild frontier that fluctuates between signal and noise, order and chaos. Too much chaos and links are weak, and you’re on your own in the search for relevant information. Too much order and you could get stuck on Main Street, your preferences over-defined by algorithms that attempt to guide you by making assumptions about your activity and comparing you to others. Learning algorithms are humanity’s early attempt to curate culture and relevance just like we have done on every other frontier. But these algorithms now need to learn boundaries, need to learn when we need some space to take a walk alone and be surprised.

And so dawns the age of the discovery engine. There are lots of ways to invent one, but no one’s yet done it comprehensively.

Koponen proposes the creation of personal algorithms, an “algorithmic angel” if you will, that would give us better visibility into the kinds of things that affect what information is curated for us. Today that information is mostly kept safe and proprietary by the designers of the interfaces we use. For instance when you like or comment on a post in Facebook, you don’t know exactly how that will affect your feed. These personal algorithmic systems would be ours–an ambient layer on our explorations that would be truly personal, evolving with us as individuals, taking our values into account and adapting to us as well as providing a means for discovery. They would interface with recommending algorithms, keeping them in check and making sure we have priority agency in the content environments we explore.

“For many people personal data is abstract,” Koponen says. “Generally we don’t have a lot of awareness about how our data is being used and how it affects what we see. How could this data be powering experiences that are more in tune with who we are as individuals?”

An Experiment in Discovery

Koponen’s app, Random, aims to make your subjective reality a starting point when recommending new topics and content. The New York Times described it as minimalist and almost toylike, probably because it’s simple and yet it inspires curiosity.


Random presents you with a screen tiled full of topics and when you click on one, it gives you a bunch of articles related to that topic to choose from. That helps the algorithm learn quickly, and each time you open it, your spread of topics is a little different. There are familiar subjects and some that make you go “hmm…”

“It doesn’t have a lot of utility yet,” Koponen says modestly, “but as a paradigm it could be made more comprehensive and approachable, to evolve into the kind of experience that gives you even more agency.” Like the algorithmic angel that hasn’t quite been invented yet. As the AI researchers in our podcast feature pointed out, discovery is an important part of the human experience, and so it should be an important part of what our technology enables. Currently, Random learns and adapts to your preferences but also uses the network map of this data to enable surprise and discovery–to create a balance between relevance and serendipity.

Let’s say you’re into design, sushi, Apple, and travel. In Random, these are not categories per se, but points in a huge network graph that create your personal profile in the universe of the app. Nothing is truly random, of course. Surprise comes from:

1. Your personal choices

2. Expected correlations with other similar people

3. Trending topics

Where trends are concerned, even though a particular connection may not be found in your profile, these topics are so popular at a given time that it’s likely you’ll be interested in them. There was a bombing in Paris. Paris is something you’ve shown a lot of interest in, but you don’t always want to hear about bombings. Random takes that longer arc into account.
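To make the mechanics a little more concrete, here’s a rough sketch of how a feed might blend those three sources of surprise. The source names, topic lists and blend weights are invented for illustration; this is a guess at the shape of the idea, not Random’s actual code:

```python
import random

random.seed(7)

# Invented example sources and blend weights -- not Random's real data.
sources = {
    "personal":   (["design", "sushi", "Apple", "travel"],   0.5),
    "look_alike": (["architecture", "typography", "Kyoto"],  0.3),
    "trending":   (["Paris", "elections", "space launch"],   0.2),
}

def next_topics(n=6):
    """Draw a small spread of topics, mixing relevance with serendipity."""
    names   = list(sources)
    weights = [weight for _, weight in sources.values()]
    spread = []
    for _ in range(n):
        source = random.choices(names, weights=weights)[0]
        spread.append((source, random.choice(sources[source][0])))
    return spread

for source, topic in next_topics():
    print(f"{source:>10}: {topic}")
```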

To take you beyond your current personal interests into new territory that won’t feel obtrusive, Random does an interesting pirouette, leapfrogging behind the scenes using subtle links within the content you consume. It looks for stepping stones. You might ask why you suddenly see an article on algae.

“Because of the interface and its underlying dynamics, it’s possible every now and then to bring in a wild card,” Koponen says.  How is that different than anything Facebook or Twitter or Pinterest does? Because it’s just one of many choices that are presented to you, not an ad you have to look at.

You might like design, so somewhere back in the articles you read, or that someone like you read, there was a design article that was related to bioengineering and had to do with algae, and it somehow involved the design process. So now there’s just one suggestion for algae, and you don’t have to click on it unless you’re curious. There are many other choices. (Personally, I’d be curious enough just to know what the connection was.)
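Under the hood, a stepping-stone suggestion like that can be as simple as a short walk across the links between pieces of content. The little graph below is invented to mirror the design-to-algae example; Random’s real linkage data is obviously richer:

```python
# Invented content-link graph for illustration only.
content_links = {
    "design":         ["typography", "bioengineering"],
    "bioengineering": ["algae", "prosthetics"],
    "typography":     ["fonts"],
}

def stepping_stones(interest, links, hops=2):
    """Collect topics reachable within `hops` links of a known interest."""
    frontier, found = [interest], []
    for _ in range(hops):
        next_frontier = []
        for topic in frontier:
            for linked in links.get(topic, []):
                if linked not in found:
                    found.append(linked)
                    next_frontier.append(linked)
        frontier = next_frontier
    return found

print(stepping_stones("design", content_links))
# -> ['typography', 'bioengineering', 'fonts', 'algae', 'prosthetics']
# "algae" shows up as one wild-card tile among many -- a suggestion you can
# ignore, not an ad you have to look at.
```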

What Does the Future Look Like?

Koponen is a humanist, so he’s always asking how technology takes our personal values into account when it uses our personal data, because what we consume feeds what we create, and this whole adaptive content universe will affect how human culture is curated–in other words, what our future looks like.

We want what we want, and even that’s hard enough to figure out, much less explain to a computer algorithm. That’s because most of those preferences bubble up from the subconscious, a far more complex network than anything we’ve ever built. We don’t want to look at the same things we’ve always seen before, but we don’t want to be insulted by stuff that’s too far out: jarring experiences that break our technology’s rapport with us.

We are creating our world even as we experience it through our unique perceptual filters. It shouldn’t come as a surprise. Machine learning–recommendation, discovery–is only reflecting that process and making it more obvious.

We created different media to ensure that we have access to the information that we consider valuable, meaningful. Something worth keeping. The key here, Koponen says, is that there will be technology creating information for us that can also act as a mediator, curating things for us. Culture is a repository of our connections, and it connects us to one another. But it also thrives on diversity. When machines are curating culture, we want them to understand that reality is subjective, but when it becomes too subjective it isolates us.

Personally, I want to understand how my culture, my network, is evolving–especially when machines are creating and making choices about the world that I see. Send me an angel already.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.


“If your objective was to invent a microwave oven, you would not be working on radars.”

These days, amidst a great collective effort to reverse engineer innovation, everybody’s looking to model the success stories. Tales of disruption pepper our social media feeds, and we want the magic formula—the algorithm—for innovation.

While magic is tricky, success is even more deceptive. That’s because our measure of success, the objective, is “blind to the true stepping stones that must be crossed.” These are the words of Joel Lehman and Kenneth Stanley, the inventors of a breakthrough evolutionary algorithm for robotic neural nets, called novelty search.

What do robot brains and algorithms have to do with our current paradigm of innovation?

At the Evolutionary Complexity Research Group (EPlex) at the University of Central Florida, Lehman and Stanley programmed their AI to abandon its objectives and search for novelty, much like nature’s evolutionary “algorithm.” “Do something you’ve never done before,” they told the robots. They put them in a maze. Guess what? The robots with the novelty search algorithm got out of the maze faster than the ones armed with a plan and a list of best practices. In other words, objectives actually hindered their search. Freed from them, they stopped banging into walls and learned to walk. Are we so different?
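The scoring idea at the heart of novelty search can be sketched in a few lines: reward a behavior for being far from anything seen before, and say nothing at all about the goal. The behavior representation, distances and threshold below are simplified placeholders, not Lehman and Stanley’s actual implementation:

```python
import math

def novelty(behavior, archive, k=3):
    """Average distance to the k nearest behaviors we've already seen."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior, seen) for seen in archive)
    return sum(dists[:k]) / min(k, len(dists))

# A "behavior" here is just the (x, y) spot where a maze robot ended up.
archive = []
population = [(0.1, 0.2), (0.1, 0.25), (5.0, 4.0), (0.2, 0.1)]

for endpoint in population:
    score = novelty(endpoint, archive)
    if not archive or score > 1.0:       # novel enough? remember it
        archive.append(endpoint)
    print(endpoint, "novelty:", "first" if score == float("inf") else round(score, 2))
# The robot that wandered somewhere genuinely new, (5.0, 4.0), scores highest --
# and "new" says nothing about whether it's any closer to the maze exit.
```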

Disruption and adaptation ensure the survival of a species, a business, or any agent in a complex system. A network takes in diversity and puts out emergence (the real hero of anyone’s innovation story).

Case in point: two artificial intelligence researchers who use evolution to program artificial neural networks that “learn,” and end up writing a book about Why Greatness Cannot Be Planned. Are we approaching innovation all wrong by holding it to standards that are too rigid?

So if you want to design for emergence, the scientists in our interview say, the name of the game is to be a treasure hunter. The path isn’t always clear until it’s behind you. Go where curiosity leads you in search of novelty, whatever seems interesting, and you’ll begin to collect the right “stepping stones” for that next big thing…

d4e: Ken Stanley and Joel Lehman, two AI scientists, you wrote a book about Why Greatness Cannot Be Planned. How did that happen? (I’m guessing that wasn’t the plan.)

Ken: There are a ton of self-help books about how to pursue greatness and achieve your potential. A lot of it is speculative and philosophical. What’s unique about our perspective is that we’re offering hardcore scientific empirical research and experimentation that supports the approach that we’re advancing in the book. So people reading this book looking at these ideas can feel a certain level of confidence that they don’t normally feel about where these ideas come from: We weren’t trying to become self-help gurus; we were doing experiments in artificial intelligence. We unexpectedly stumbled on the principles we describe in this book about why greatness cannot be planned.

d4e: The Chinese finger trap is a metaphor for innovation. Why?

Joel: In the Chinese finger trap, the steps that you need to take to solve the problem are exactly the ones you wouldn’t expect would lead to the solution. It’s a model of deception in innovation, in that making a breakthrough discovery often involves taking steps that are seemingly unrelated to the objective.

Ken: It’s the simplest example of this type of innovation process which we’re claiming is very common, where what you need to do looks like it’s exactly the opposite of what you want. It turns out you need to do exactly the opposite of what you think you should. The Chinese finger trap is designed to be deceptive in that way.

You have to push yourself more into the trap to get out of it. The problems of life are far more complex than that, though, so they’re going to be even worse than a Chinese finger trap in terms of being deceptive. If they weren’t, we would just solve all of them. In order to escape the Chinese finger traps of the world, we have to sometimes be willing to step into the unknown rather than go in the direction that’s obvious or “correct.”

d4e: Great invention is defined by the realization that its prerequisites are in place. Apple spends much less than its competitors on R&D. Do you think that those two ideas are related?

We could speculate that people put a lot of effort into pursuing an objective, and that can be very expensive, because maybe the right stepping stones just haven’t been laid. So you’re going to be grinding for a long time to create all the prerequisites you need to get this thing to work. Whereas if you take an unusual approach (and I would be willing to bet that Steve Jobs wasn’t very objective-driven) where you don’t follow an objective path, you can sometimes arrive somewhere interesting and valuable with a lot less effort than someone who is following an objective. People like Steve Jobs seem to have a knack for following those types of trails and taking the kinds of risks that are necessary, and saying, “Let’s just see where this leads.”

d4e: How did an algorithm change your life? Was it a eureka moment, or a slower evolution?

Ken: This question gets to the origins of the idea behind novelty search. There was actually a particular eureka moment before this algorithm that led to the novelty search algorithm, but also later there was the gradual dawning, for both Joel and me, that the algorithm is really a way of thinking about life.

Before novelty search, there was an algorithm called Picbreeder, which is a website that we put up in our research group for people to come from the internet to breed pictures, and then publish them on the site. That sounds a little strange, but basically it means that you could come in and pick your favorite picture from a set, and it would have offspring. And the picture’s “children” would be slightly different from their parents — just like if you had children, they wouldn’t be exactly the same as you, but not completely different either.
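(As an aside, the breeding loop Ken describes can be sketched roughly like this. Picbreeder evolves actual images; in this toy stand-in the “picture” is just a list of numbers and a random pick plays the part of a human clicking a favorite.)

```python
import random

random.seed(0)

def offspring(parent, n_children=6, mutation=0.1):
    """Children resemble the parent, with small random variations."""
    return [[gene + random.gauss(0, mutation) for gene in parent]
            for _ in range(n_children)]

picture = [0.5, 0.5, 0.5]                  # stand-in for the starting "alien face"
for generation in range(5):
    children = offspring(picture)
    picture = random.choice(children)      # stand-in for the user picking a favorite
    print(f"gen {generation}: {[round(gene, 2) for gene in picture]}")
# After enough picks the genome drifts somewhere nobody set out to reach --
# the alien face becoming a car, without "make a car" ever being the objective.
```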

These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”

I had an experience playing with Picbreeder, where I started with an image that looked like an alien face. I was playing with the image, and it eventually bred into a car. This moment when the alien face turned into a car was the epiphany moment when I was struck with the realization that I had achieved something interesting without trying to achieve it. While it may sound trivial — after all, Picbreeder is just a toy — everything I’ve been taught for years in computer science said that the way you make computers do things — in fact the way we as humans generally do things — is to set your goals and somehow help the algorithm push the computer into the direction of achieving that goal. But this experience was so different than that.

I was breeding these pictures myself, but we have evolutionary algorithms that breed automatically as well, without human assistance. So I realized that this experience of achieving something without trying to achieve it probably has implications far beyond a picture breeding service. This led to the proposition that there could be an algorithm that doesn’t have a clear objective.

This is what I began to speak to Joel about before the novelty search algorithm was created.

d4e: So the idea of discovery without objectives led you and Joel to create the novelty search algorithm. You say that novelty search is paradoxical. How so?

Ken: The novelty search algorithm reflects the philosophy that sometimes you can discover things if you’re not looking for them. It gives the computer the ability to have serendipitous discovery but not necessarily be pigeonholed in the direction of trying to search for one thing and one thing only, or create one type of solution to a problem. Instead of a robot that has one type of walking gait, for example, maybe you have many.

We were playing with this for years, and it would constantly surprise us by doing things that people wouldn’t expect. You don’t tell the computer what to do, but it ends up solving your problem better than if you did. We saw this paradox over and over again. After a few years we realized that what we were seeing was about more than a computer search algorithm.

The more I spoke about the algorithm at computer conferences, the more people would ask about things unrelated to computers, such as: What does it mean for my life if sometimes the best way to find something is to be not looking for it? Does this have any broader implications for how we run innovative cultural institutions? Or how we run science?

Or how about the way we support innovation in society?

It became apparent then that it is extremely important that we have this discussion as a society. If objectives are not always the way to guide innovation and scientific progress, then why is it that almost everything we do is objective-driven? That’s when we decided to write a book, because this kind of message is hard to get out in a computer science journal article aimed only at artificial intelligence. This is a much broader issue, in terms of how we foster innovation and treat objectives in our culture.

d4e: In your book, you ask us to imagine a cavernous warehouse of all possible discoveries. You say that “the structure of the search space is just plain weird.” Can you tell us what you mean by that?

Joel: The structure of the innovation space is weird in that it’s hard to predict where certain things will be. The linkages between different kinds of innovations are surprising. That relates to the broader area of serendipity in science or artistic realms, where you might inadvertently create the next big thing. A typical example is the vacuum tube, which was created as part of fundamental research into electricity. The person who was exploring that didn’t have the idea of a computer in mind. It just turned out that from this one point in space, from discovering a vacuum tube, you actually could reach computation.

Ken: Vacuum tubes facilitate computers, and that’s a connection that exists in this big “room” of possible things. But who would ever know that? Somebody later picked up on it and said, “Now that this exists, now we can create this other thing.” There’s a lot of opportunity there for serendipity, in the sense that you wouldn’t even be working on vacuum tubes if your main interest was computation. Vacuum tubes don’t look like they have anything to do with computation. So in some way, getting all this stuff to exist requires that people sometimes are not working intentionally on the ultimate achievement that stems from the effort put into this chain of events.

d4e: Order is important in search. How so?

Ken: When you first hear about novelty search, that we should search for things that are recognized for their novelty and ignore everything else, our intuition might say, “This is just random. How can that kind of search be beneficial?” I think people assume there’s some kind of coherent order that search induces. In other words, we assume that things get better as you continue to improve. That’s an order that we’ve come to expect from an objective — like if you’re trying to get better at school, your test scores will go up. We expect to start out low and get higher, and that’s the kind of order we’re comfortable with.

Whereas with novelty, it’s harder for us to think about what the order of occurrence is going to be, because we’re no longer talking about an objective metric. What we try to argue is that there is an order that’s inherent in a search for novelty — it’s just a different kind of order, one of increasing complexity.

Instead of increasing quality along some objective metric, novelty search basically creates a situation where if you continually try to do something new, you will quickly exhaust all the simple things there are to do. There are only so many simple ways to do things. By necessity, if you succeed in continually seeking novelty, things will have to become more complex over time.

When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge that we’re gaining in our scientific understanding of natural evolution.

At some point, somebody invented a wheel. Thousands of years later, someone was on the moon. Things don’t go in the other order. You don’t figure out how to go to the moon and then later come up with the wheel. So there is an order in innovative processes that are driven by invention rather than by trying to achieve a specific objective metric. And that order tends to be one of increasing complexity. The reason I bring this up is that there’s good reason here to be confident that the search for novelty does have some kind of coherent principle, and it is anything but random. It’s just that it’s not following the order that we’re used to (of “worse to better”).

We wanted to suggest to our readers that going worse to better is actually not that principled, even if it makes you feel comfortable, because of the fact that it’s a mystery how to do it. We don’t necessarily know what the stepping stones are. So it’s really just a security blanket to say, “I’m going to keep on improving” if you don’t necessarily know how that’s going to happen.

d4e: The age of best practices is over. Would you agree with that?

Ken: There is room, despite everything we’ve said, for trying to improve. But we have to be clear about where that process is appropriate. If your aims are relatively modest, it can be entirely appropriate to just try to improve. If you just want to try to improve your lap time, that’s reasonable. But when it comes to fostering innovation on a larger scale, I’d be ok with endorsing the idea that the age is over, because we should have a revelation that simply trying to continually improve in an objective sense just doesn’t work.

There’s a great opportunity for a paradigm shift here. The amount of information we have now from artificial intelligence is starting to expose problems with the traditional view of achievement and innovation. Our book exists because we had the ability to do experiments that would have been impossible in the past. These experiments exposed a flaw in the paradigm of “innovation through continual improvement.”

Joel: And yet it seems that at the same time, the cultural crest is pushing more toward the paradigm of objectives and continual improvement. We have evidence that this isn’t how the world really works, especially in areas of innovation, discovery and creativity. It’s troubling that so many innovation endeavors are still ruled by objective-based approaches. When it comes to innovation, maybe we should loosen the reins just a bit and integrate some of the knowledge that we’re gaining in our scientific understanding of natural evolution, and how creativity works — and some of these insights come from artificial intelligence.

Ken: There should be a paradigm shift, but we wrote the book because there hasn’t been. This is a current argument about how we should approach innovation. When Joel says we run a lot of things in this very objective-driven way, that’s literally true. Look at what we’re doing in schools. The standardized testing craze is all about objective measurement, and it’s used for all kinds of things, not just for students. We basically say the school has to objectively improve on some metric, or the school gets penalized. It’s all based on objectives, and there’s a lot of discussion about whether that’s a good idea or not, but we’re not part of that debate explicitly.

Our work offers a different angle, which says that if you kept demanding higher scores, eventually everyone would get a 100. That looks like a pretty naive approach. There should be room for people to try new things — and that could lead to scores going down from time to time. If you always penalize for scores going down, then none of those things become possible.

In the world of science funding, one of the things you almost have to do to get money for research is to state your objectives. We’re running our entire federally funded scientific enterprise — really, billions of dollars — based almost entirely on objectives. You can hardly get your word in if you don’t state in the beginning what you’re trying to achieve. It’s not common sense; it’s a problem.

d4e: There’s a book called Why A Students Work for C Students. How does that relate to this philosophy?

Ken: I haven’t read that book, and I think it’s obvious that that’s not always the case — there are plenty of A students who are the bosses of C students. But that’s an interesting question. You could imagine there’s a connection there in that somebody might assume that if you get A’s that’s the correct goal for getting to the top of the heap in some organization. In reality, often it’s the case that the route to success is more circuitous. It may be that the C student was more willing to take risks that the A student just didn’t take because the A student was so single-mindedly focused on doing what everyone says you’re supposed to do in order to be successful.

d4e: Objectively speaking, unstructured play can be bad for us as individual adults, but good for us as a society. True or false?

Ken: I would say false, because I think it can be a good thing for individuals and society. Unstructured play can be risky, though. It may lead to no particular advance to the individual; on the other hand, it may lead to something great. You just can’t be sure. You may have a hobby, and pursuing that interest may just be “play” for you, but it could end up being the stepping stone to your next great achievement.

And of course I’m totally in agreement with the idea that it’s also beneficial to society, because we need people to pursue their passions and try the things that other people wouldn’t necessarily try, so that they can build the stepping stones for others to follow.

Everybody can benefit, but we have to just accept that anything unstructured has risk. That’s why we tend to be against this kind of approach to life as a policy matter: we like to control things with standards and objectives and metrics, because we’re afraid of risk, ultimately. At the same time, you have to take risks in order to have great achievements in the end.

d4e: Let’s say I run a venture capitalist firm. How should I go about building a portfolio of startup investments?

Ken: I think venture capitalists actually put the ideas in our book into practice in a better way than a lot of other areas in society because they understand the value of a portfolio: Not all of your bets need to pay off. Just some of them need to pay off. VCs are willing to go in some very exploratory, risky directions. If you have one big hit, it can make up for all the ones that didn’t pan out. This is, I think, a pretty good lesson for society in general. In a lot of our institutions we guard against failure as if it’s some kind of pathology to make a mistake. Venture capitalists have good instincts and are willing to have failures, and that allows them to search in a less objective way. I think we would find that the most successful venture capitalists are less objective about their portfolios.

d4e: You don’t seem to dwell much on the concept of probability. Don’t you like it?

Ken: The book isn’t really about probability, but I think we would endorse probability as an important concept. We see its importance in our field of machine learning and artificial intelligence. The point that’s being made in the book is largely independent of an in-depth discussion of probability, although it factors in to risk.

Any individual discovery could be regarded as highly improbable. In innovative processes, the likelihood of making a particular discovery is unpredictable. And yet, overall, you can increase your ability to make discoveries and the probability that you’ll make some interesting discovery.

d4e: You say that novelty is information-rich. What did you mean by that?

Joel: One way to look at novelty is that it’s information based on not where you’re trying to go, but where you’ve been in the past. In some sense, it can be seen as more information-rich than taking an objective-driven approach, in that you completely know where you’ve been in the past, and so that’s more certain. When you say “this is novel,” you can have confidence that it actually is new. Whereas if you’re trying to take a step along the way to your potential objective, you have to be willing to be uncertain, because you really don’t know if that’s going to be a stepping stone toward your goal.

More than that, the idea of being genuinely different often requires some sort of conceptual advance. You can imagine, for example, being on a skateboard. Who’s going to be more likely to create a novel skateboard move? Will it be me, who’s likely to fall on my butt, or will it be Tony Hawk, who has all this knowledge and experience to create something genuinely new? There is some ability, knowledge, or talent that’s required to create something that’s genuinely new. In that way it’s also a source of information.

d4e: Is it possible that there’s a historical trend toward us wanting more certainty? And if so, is the value of novelty rising or falling?

Ken: I think that novelty has always been valuable. What’s happening is that because of things like the internet, there’s now a significantly greater potential for the creation and dissemination of novelty. We’re exposed to much more novelty in a short time than we used to be, because the network has created this capacity to expose people to new ideas almost instantaneously and from enormous numbers of different people. That means that it’s going to accelerate the production of novelty, and we’re all going to be exposed to more, and that’s a feedback cycle. Now that there’s more novelty around, there are more stepping stones, and so more people will create novelty.

There’s a tendency to trap people in the things that they’re comfortable with, and as long as that’s making money, everybody’s happy. But that doesn’t produce the stepping stones we need for innovation.

d4e: What about machine learning and the curation of information? What about phenomena like the popularity of the Kardashians? Aren’t we suppressing novelty?

Ken: Because computers are making decisions for us about what we look at, and those decisions might cause us to not be exposed to interesting things?

d4e: Right, like the rich get richer effect. The more that machines learn our preferences, the more they are fed back to us.

Ken: I think there is that risk. We have to guard against always being given just more of what we want, what we are already comfortable with. I’m pretty optimistic about human nature and its ability to get around the tendency toward convergence. Certainly I think the algorithms will play a role in that too. Algorithms like novelty search can give us a bit of a clue about how to create computer algorithms that are not so convergent that they just always push you in some predetermined direction.

We’re exposed to much more novelty in a short time than we used to be, because the network has created this capacity to expose people to new ideas almost instantaneously and from enormous numbers of different people.

In general, we like to be exposed to stuff that’s unexpected. And we see that there’s been some attempt to do that in services like YouTube, for example. On the homepage they try to expose you to things you weren’t searching for. Of course they may base it on things you’ve searched for in the past, so there’s a bit of a paradox there.

It’s in the interest of anyone running a business to hook people into new things. People are trying to do that, with algorithms, but at the same time, the danger you’re identifying is real, and we should be cautious about it — because there’s a tendency to trap people in the things that they’re comfortable with, and as long as that’s making money, everybody’s happy. But that doesn’t produce the stepping stones we need for innovation.

Joel: One potential danger with some of these algorithms is that they can get very good at providing us with trivial novelties — novelties that are just some modulation of some formula. “The top 10 X, Y or Z.” It fulfills a very basic human desire for novelty, at a very trivial, unfulfilling level. Maybe over time people will become more aware that they’re being exploited by these algorithms. Like Ken, I’m optimistic about humanity’s ability to adapt to technologies. But it is worrisome that this very human desire for novelty can be undermined by clickbait.

d4e: Will there be enough competition in artificial intelligence for robots to evolve, given that some firms may dominate development?

Ken: These kinds of endeavors can become rather objective when a dominant firm has set the standard for success. It does potentially dampen the ability to try new things. Something really novel might not look as good. Someone might say, “Our way of doing things is the objectively superior way; these other approaches are objectively inferior, and you shouldn’t invest in those.” I think that’s a problem, and we are suffering from it right now. There is a belief that there’s a canonical approach that works really well, and therefore other things should be relegated to obscurity. Shedding some daylight on some of these less conventional approaches would help foster diversification. Of course, the people still need to be experts. We’re not saying that any idea off the street is worth millions of dollars; but if an expert has an unconventional idea that looks interesting, let’s give it a try.

d4e: Making distant associations and unlikely connections within the network is, to me, crucial to innovation. For us, these processes are often subconscious. Will AI have a subconscious?

Ken: I think that’s on the minds of people in the field. Generally, people in machine learning are concerned with what you’re describing as a subconscious process — the ability to make deep, subtle connections. That’s probably a little bit ahead of where the field is at the moment in terms of making those connections through algorithms on computers, although there’s certainly work being done in that direction. Anything that’s interesting about the human intellect is fair game for AI.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm. 

[Image: to-err-800x510]

As we teach AI to visually recognize the things it “sees,” we get some interesting imagery. In the examples below, a neural net exaggerates the variation in each “version” of the picture of reality it renders, creating images that have evolved to become unrecognizable to humans. Minor variations in input, in other words, appear to lead to some very wrong conclusions (Nguyen, Yosinski, Clune 2015).
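As a rough illustration of the recipe behind such images, the sketch below mutates an image at random and keeps any change that raises a classifier’s confidence, with no requirement that the result stay recognizable to a person. The confidence() function is a hypothetical stand-in for a trained network; the actual study used far more sophisticated evolutionary algorithms and image encodings.

    import random

    def confidence(image, label):
        # Hypothetical stand-in for a trained neural network's confidence
        # that the image belongs to the given label.
        return sum(image) / len(image)

    def evolve_fooling_image(label, size=64, steps=5000):
        image = [random.random() for _ in range(size)]  # start from noise
        best = confidence(image, label)
        for _ in range(steps):
            i = random.randrange(size)
            trial = list(image)
            trial[i] = min(1.0, max(0.0, trial[i] + random.uniform(-0.1, 0.1)))
            score = confidence(trial, label)
            if score >= best:  # keep any mutation that raises confidence
                image, best = trial, score
        return image, best

Nothing in the loop ever asks whether a human would recognize the picture, which is exactly how the evolved images drift away from anything we would call real.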

[Image: diversity_40_images_label]

Image courtesy of Evolving AI Lab

Similarly, our nervous systems make many edited versions of reality before an image is finally presented to our conscious mind. Visual information is compressed as it travels from the eye to the visual cortex (Itti L, et al, 2015). When scientists look into hallucinations, they trace them to something called “ectopic” vision. This is similar to the floaters or swirls that any of us might see in our vision at one time or another. Scientists believe hallucinations occur when the brain receives “ectopic” input and tries to make sense of it, forming things that may not be there.*

[Image: 16_best_looking_images_0]

Image courtesy of Evolving AI Lab

Do the “hallucinations” of deep neural networks mirror patterns in the development of the human brain? And did hallucinations play a major role in the development of abstract symbolism and interior life?

Archeologists agree that something new happened to the human consciousness roughly 40,000 years ago. In cave art around the world, we see the emergence of symbolic art and fantastical half-human/half-animal creatures. Interestingly, some experts claim these creatures and drawings are stylistically identical to what modern day psychedelic users have drawn. Anyone who has seen Ayahuasca art from the Amazon will agree that it is a very distinctive style. It’s an astonishing notion, that caves around the world depict art that may be some of humanity’s earliest records of altered states or hallucinations.

Graham Hancock makes the case that humanity took a major leap forward in symbolic thinking and reasoning with this artistic development. What is interesting about his case is that he views it from inside the civilizations who had the experiences. The general claim of people, hunter/gatherer or modern, who have depicted these strange creatures in cave art is that they are sentient beings — they anticipated their human visitor and they have a message to impart. Shamans in the Amazon support this notion. So do modern subjects in controlled experiments with DMT, the psychoactive compound found in many hallucinogens, what some have termed the spirit molecule. As far-fetched as this view may seem, can we completely rule out that humanity may have been “lifted up” by an outside consciousness? To deny this possibility would be to fall into a form of subtle reductionism.

All of this raises the question: do robot dreams mark the beginning of stirrings in interior consciousness? What more is consciousness than an elaborately complex network of internal and external impressions? And is an inaccurate view of reality a necessary part of discovery?

*An interesting aside: An Australian scientist was blinded in an accident and told by doctors to give up “seeing,” because, lacking real input, the images from his optic nerves would drive him crazy. He refused, and found that though he is legally blind, he has been able to roof his own house and can, for example, watch tennis on TV. He still makes facial expressions when speaking because his visual cortex fills in a picture of what he thinks is there.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.

 

REFERENCES

Nguyen A, Yosinski J, Clune J. “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images.” Computer Vision and Pattern Recognition (CVPR ’15), IEEE (2015).

Itti L, Rees G, Tsotsos J, Anderson CH, Van Essen DC, Olshausen BA. “Directed visual attention and the dynamic control of information flow.” Neurobiology of Attention (2015).

[Image: sciences-next-big-experiment]

Science isn’t a club. It’s a cultural activity, and it should be participatory. But if it were a club, these people would have made it a whole lot cooler.

“We are the kids who got in trouble in chemistry lab for setting the things on fire that we were not supposed to set on fire.”

That’s the official description of the people who made Experiment, a crowdfunding platform for scientific research. Crowdfunding and crowdsourcing have been game-changers for many industries, including finance, humanitarian relief, startups, entertainment, and even the military, and the concept has now come to work some network magic on science. Science funding has traditionally been controlled by a few institutions and focused on their objectives. The network age, though, gives us the opportunity to widen those pathways to a greater number of scientists and a greater diversity of ideas.

On Experiment, you’ll find active research in rare diseases, dinosaur excavation, vitamins and eyesight, zombie ants (our favorite), and the scientific rigor you might expect from a couple of scientists who developed anthrax medicine for the Army.

Science tells us that innovation is path-dependent, as we learned from our interview with AI researchers Kenneth Stanley and Jeff Clune. A network that includes openness and diversity makes discovery and innovation more likely.

In The Competitive Advantage of Nations, Michael Porter introduces the idea of strategic clusters. (You know — Italy makes good shoes, the Valley spawns startups, etc). The gist of it is that innovation doesn’t just come from companies; it comes from ecosystems. The “stepping stones” to the next big thing arise from the surrounding network, often without a direct relationship to an objective. Experiment.com makes it possible for the stepping stones to the next big scientific breakthrough to come from untraditional channels.

design4emergence asked Experiment co-founder Cindy Wu about the path that led her and Denny Luan, undergrads at the time, to launch this curious startup experiment in 2012.

It was an ordinary summer for Cindy and Denny, resequencing proteins to fight anthrax…


d4e: How did you get the idea for Experiment?

Cindy Wu: Denny and I were final-year undergrads, and with a group of other students we had just designed an anthrax therapeutic for the Army. We used a crowdsourced video game where you can put these proteins up online and play the game to alter different parts, to see if you can come up with a new drug or probiotic. We made 87 different versions of that protein that summer. One was able to decapsulate the protective coat on the outside of anthrax bacteria.

The reason anthrax is so lethal is that when it enters your body, your body misidentifies it as safe, and so it spreads. But if you’re able to take off this protective coating, your body will recognize it as foreign and your immune system will fight back.

We presented that research at the largest synthetic biology conference at MIT, and published the research, and the Army’s now doing follow-up work on it. What we found is that the same drug we created could also be used as an antibiotic for more generic bacterial infections in the hospital.

We needed like $5,000 to get that project started because we had all the techniques; we just needed to buy a few reagents. When I asked my professor where I could get grant money, he just said, “Look, Cindy. You’re an undergrad. You don’t have a PhD. The system just doesn’t fund people like you.”

So that’s when we decided we were just going to solve our own problem. If the government didn’t want to fund young scientists just because they didn’t have a PhD, then maybe the Internet could. We took a lot of inspiration from Kiva.org, which is a microfinance site. Denny had the idea of building a Kiva for science. We didn’t really know what that looked like, so we decided to just try it. We got nine of our professor and grad student friends to put up projects on the site. We funded six out of those first nine and never looked back.

d4e: What was the response from the scientific community?

CW: The majority of our users are professors and grad students at academic institutions, although we do allow anyone who has a research idea to propose projects on the site.

Over time, academics have become really interested because they think it’s a good way to fund really early stage research.

d4e: What types of experiments have you seen that wouldn’t be likely or possible elsewhere?

CW: There was a project where researchers tried to alter their vision to be able to see infrared. They haven’t published the results yet, but that’s something that probably wouldn’t get funded in the traditional realm.

There’s one experiment that actually uses the crowd to collect the data. The researcher ordered corn that is GMO and non-GMO; it looks identical. He sent it to all his backers, and the backers put it in their yards and watch which corn the squirrels or other animals prefer. That part of crowdfunding and crowdsourcing is unique.

d4e: Is it usually research scientists carrying out the actual experiments?

CW: Most of the people proposing research on the platform are the ones actually carrying out the experiments, but we do have projects where people went through the literature and saw something they wanted to test, and then partnered with an institution to do the research.

For example, there was a husband and wife team, and the wife found out she had a rare prion disease. Very little is known about prion diseases, but they found a compound that they wanted to test. Once they funded their research on Experiment, they applied to grad school, and now they’re both PhD students at Harvard Medical School working on prion research.

One of the projects that’s raised the most on Experiment is run by a dad who found out both of his daughters had Batten disease. He did a literature search and found that there was one doctor in New Zealand who had treated the same type of Batten in sheep, so he’s replicating that study and using the rest of the funding for other types of gene therapy. I think we’re going to see a lot more research in rare diseases. This happened even before crowdfunding, where parents would take research into their own hands, and often they become experts in the field because they’ve read every paper that’s been published and talked to all the scientists.

d4e: What unmet demand is being served by Experiment?

CW: The most important thing is that this gives scientists an avenue where they have full control over whether or not their research starts. In the traditional grant system you apply for a grant and maybe wait a whole year before you find out whether you get funding. With crowdfunding and Experiment, scientists put the idea up, get the money within 30 days and try it out, and if it works, run another campaign or use the preliminary data to go after a larger government grant. That was never the case in the past. The closest scenario we had before would be a faculty member going to a department head for some startup funds from the discretionary budget for early-stage research, but because research funding is drying up, it seems like those opportunities are dwindling.

d4e: What makes a successful Experiment?

CW: The most important thing is for the project to be well defined and for the researcher or whoever’s running the campaign to be very committed to the campaign, and to engage the community after it has run.

d4e: What’s new?

CW: The Journal of Results. People always wonder: once you fund a project, what do you get? The Journal was the first time we aggregated results from finished projects. That closes the loop on what the reward is for giving to science.

d4e: What are your goals for the network you’ve designed?

CW: We want to create a world where anyone can be a scientist. We want to be the first place people go when they have an idea for a scientific project, where they can share the results with everyone who has access to the internet.

I think the majority of the research will be executed by people in the public. And it should be, but it hasn’t been that way because to get funded by mainstream sources you have to have a PhD. Our network gives more access to more people (everywhere, including underserved communities and countries) who have the ideas that will push science forward.


So far, Experiment.com has rounded up over $5 million in research funding from 19,000 backers, and the projects it has funded have resulted in 20 published papers.

You never know where an idea will come from. No one knows this better than designers and our kindred spirits, scientists and inventors. The greatest leaps forward emerge as much from a network as from the genius of a single mind. Sometimes, the right objective for designing a network is discovery itself.

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.

[Image: dg_d4e_01_andershoff_600x400-002]

Near a window, steam rises in spirals from a cup of tea. A column of sunlight intervenes, producing emergent ribbons of heat and ephemeral color. It’s an ordinary cup of tea, and steam is an ordinary product of nature’s processes. But to say that there is no art in nature is only a question of agency and scale.

This is a question Anders Hoff might ask. He is a search engine consultant and a mathematician in Oslo. He is also an artist, but a kind of artist that’s new on the scene, one with a different sort of medium, the kind you’ll start to see more of.

Anders is an artist of networks.

When he’s not spending six to nine hours a day solving complicated problems in search, he’s most likely listening to metal and using math to create generative artwork like this:

[Image: pic1inconvergent]

It takes a few days to produce something like this. He writes a type of mathematical function, a generative algorithm, that defines the behavior of an agent-based data set. An agent is just something that acts on a network of things. When you’re creating an artificial network on a computer, you can define what you want the agents to be and do, to a point.

The agents are linked to other agents, and you can give them simple rules, but, like the simple rules in nature that create incredibly complex patterns, the agents are part of a network, and together they evolve the overall network in ways you can’t anticipate.

[Image: pic2inconvergent-300x300]

In the case of this piece, which Anders describes as something akin to flower petals, or cabbage, or the inside of your intestines, the agents are the vertices of the mesh. You start with a simple triangular mesh, and the vertices interact with each other in some way. Every vertex is connected to five or six other vertices. This creates the whole mesh, the network. These vertices act on simple rules that attract them to their neighbors and cause them to avoid the unconnected vertices of their mesh. In the end (if there is an end), it’s the complete network of all the vertices interacting that makes the mesh move or behave and evolve as it does.
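A rough sketch of that kind of rule, with made-up parameters and a random starting mesh rather than Anders’s actual code, might look like this: each vertex drifts toward the vertices it is connected to and away from nearby vertices it is not connected to.

    import math, random

    def step(points, edges, attract=0.05, repel=0.02, radius=0.3):
        # Move every vertex a little toward its connected neighbors and a
        # little away from close vertices it is not connected to.
        new_points = []
        for i, (x, y) in enumerate(points):
            dx = dy = 0.0
            for j, (px, py) in enumerate(points):
                if i == j:
                    continue
                vx, vy = px - x, py - y
                dist = math.hypot(vx, vy) or 1e-9
                if (i, j) in edges or (j, i) in edges:
                    dx += attract * vx
                    dy += attract * vy
                elif dist < radius:
                    dx -= repel * vx / dist
                    dy -= repel * vy / dist
            new_points.append((x + dx, y + dy))
        return new_points

    # A tiny random "mesh": 30 points, each linked to a few others.
    points = [(random.random(), random.random()) for _ in range(30)]
    edges = {(i, random.randrange(30)) for i in range(30) for _ in range(3)}
    for _ in range(200):
        points = step(points, edges)

None of the resulting structure is written down anywhere in those rules; it emerges from the vertices acting on each other, which is the point.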

He creates with a beginner’s mind.

“I like how you can get interesting structures arising from these simple rules,” Anders says, and it’s that simple curiosity that drives him to create algorithms that evolve into living artifacts that resemble familiar patterns in nature, analogs to biological mechanisms, like this:

[Image: pic3inconvergent]

He doesn’t have an end result in mind. “I’ve tried to make it as naive as possible and still see if I can get that behavior. Nature doesn’t solve differential equations, but nature does evolve – so I want to make something as naive as possible.”

[Image: kelp]

Sometimes he sets out to make a thing, and he makes exactly that one thing. But most times he starts with nothing other than a vague idea, perhaps an interesting mesh that someone else has made, and plays with it until he’s tired of it. That’s how this all started. When he was 16, Anders stumbled upon a site called Complexification, which was created by Jared Tarbell, one of Etsy’s cofounders. The site featured animation created by agent-based systems. Anders started playing with Tarbell’s algorithms, copying them at first and then making his own original designs.

Years later, while studying physics and working on his master’s in numerical mathematics, Anders stumbled upon Complexification again when he was supposed to be cramming for his finals, and out of curiosity and a bit of procrastination a whole portfolio of generative art emerged.

[Image: differential_9k]

Back to our agency question. Where does the work of one network artist end and another’s begin? How does the artist know when the art is complete?

With art that grows, it’s difficult to say. When it feels complete, when it feels original, when it’s time to do something new.

For Anders, it’s a curiosity-driven process, the reward of creating simple behavior and producing something interesting. He puts his algorithms on GitHub for other coders to play with and hopes someone will make something entirely different. Evolution is the propagation of novelty, in nature and even in art, where nature and the subconscious work in complicity.

In an emerging medium like generative algorithmic art, you could think of the network itself as the artist. A network of artificial agents designed by a network of generative artists, evolving through each node, shown to other networks as art.

“The waking have one world in common, whereas each sleeper turns away to a private world of his own.” – Heraclitus

To learn how Dialog can help your business, contact us at 512.697.9425 or LetsChat@DialogGroup.com.

This article was originally published in design4emergence, a network science community sponsored and nurtured by Dialog and Panarchy, the world’s first network design firm.

“Mama always told me not to look into the eyes of the sun, but Mama that’s where the fun is.”

This line, from Bruce Springsteen – perhaps made famous by Manfred Mann – could well have been said by Toby Shannan, Shopify’s SVP of Support Operations.

You see, Shopify made its way from startup to a $5.2B valuation in nine years by solving problems in the interstitial spaces: between small businesses and e-commerce solutions, and between companies that don’t solve customer problems in a hands-on way and those that do. The market has rewarded Shopify with hyper-growth for solving small-business e-commerce headaches. That is the joy of integration – at least for Shopify.

The sorrow? All of Toby’s tech support issues are at the interface between different vendor solutions.  This is the burden of integration.

At Dialog’s recent network design symposium with the Santa Fe Institute, Toby opened his seminar by quoting SFI’s Will Tracy as saying, “The edges are where the action is in a network.” By ‘edges’ he meant the links of a network. (That is one reason why networks are best thought of in terms of “flow,” or the connections between nodes.) As it turns out, in all fields connectivity is the main source of innovation. According to the recombinant DNA theory of innovation, the only way you can create something new is to bring together two previously uncombined elements.

In an open plug-and-play ecosystem, you can create virtually anything, far more than in a closed ecosystem. However, this places greater demands on the role of the integrator, whether that is the end user or a professional intermediary. In a closed ecosystem, the ecosystem sponsor takes on more of the tasks and decisions of integration. (Think Apple vs. Microsoft or Android.)

All ecosystems, especially open ecosystems, require integrators. Bridge builders. Translators. Renaissance men and women. That’s what we need more of as we race forward, ever faster, pulled by our technology and self-reinforcing momentum, into ever deeper and more sprawling bodies of knowledge.

Specialization and exchange have created our world, but it will take renaissance men and women to keep it whole.

We need a unified worldview, right now. We can no longer afford brokenness. We can no longer afford to look at or manage problems in silos.

All silos are constructs. Organizational insiders can always tell you the informal network by which work really gets done. What is really there is a network, a series of nested ecosystems both formal and informal.

Toby manages his support and sales operations as one seamless function. In doing so, he avoids the usual escalated customer service issues that arise in the cracks between sales, customer service and tech support. People usually think of “product integration,” but “service integration” may be the secret of Shopify’s success.

In solving interstitial problems, Shopify’s team has found the same joy that Tim Cook found coming to Apple and working at the interstices of hardware, software and communications. In Tim Cook’s words, that’s where the magic is, at the boundaries.

There’s increasing business opportunity in connecting the network to itself.  With that in mind, the next time you see an integration problem, you just might see it as an opportunity.

And if you are lucky, like Shopify, it could offer you a 10 figure valuation.

Stay tuned for more insights, and join us in conversation online using the hashtag #NetworksInAction