I'm fascinated by the nature of small-world networks and how adding a little randomness can dramatically change the nature of connectedness. Our neural networks are structured a bit like this. I wanted to explore this on my own, so I wrote a little program that lets one control how much randomness is put into a clustered network of links. (My approach is based on the beta model of Watts and Strogatz.)
Clusters are like social cliques where most members of a clique link only to each other. It takes a lot of steps to move across a tightly clustered network. The program assumes a network or graph of 100 nodes, and each node can have two or four links (edges). (The goal is to start at node #0 and reach the node farthest from it, node #50.)
By adding just a few random links (think of it metaphorically as adding diversity), it takes far fewer steps to move across the network. As the amount of randomness goes up linearly, the number of steps needed to traverse the network goes down dramatically (and non-linearly).
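The effect is easy to reproduce. Here's a small Python sketch of the same idea (a hypothetical stand-in, not my program's actual code): build a 100-node ring where each node links to its four nearest neighbors, rewire each link to a random node with probability beta (the Watts-Strogatz beta step), then count steps from node 0 to node 50 with a breadth-first search.

```python
import random
from collections import deque

def ring_lattice(n=100, k=4):
    """A tightly clustered ring: each node links to its k nearest neighbors."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, beta):
    """Watts-Strogatz beta step: each link is rewired to a random node
    with probability beta (keeping the total number of links the same)."""
    n = len(adj)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and random.random() < beta:
                new = random.randrange(n)
                while new == i or new in adj[i]:
                    new = random.randrange(n)
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(new); adj[new].add(i)
    return adj

def steps(adj, src=0, dst=50):
    """Breadth-first search: fewest links needed to get from src to dst."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None  # unreachable (rewiring can, rarely, disconnect a node)

random.seed(1)
print(steps(ring_lattice()))                # pure clusters: 25 steps
print(steps(rewire(ring_lattice(), 0.05)))  # 5% random links: typically far fewer
```

With no randomness the walk from node 0 to node 50 takes 25 steps (two nodes per hop); just a handful of rewired links usually cuts that to a fraction.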
To explore small-world networks, change the % value in the text field at the bottom of the screen. The network rapidly becomes as connected as a graph where all links are random (which is extremely connected, but not very structured). Even 5% randomness (the default here) is amazingly connected. Keep clicking the check-boxes to get new networks. (I know it's a bit odd to use check-boxes as buttons; I'll get around to changing it someday.)
A variation on small-world networks is the scale-free network (the www is a good example). Here three nodes (A, B, and C) are linked to each other. (This is based on the scale-free model of Barabási and Albert.) The other nodes (added sequentially from 0 to 99) are linked to the nodes that are statistically the most popular: "the rich get richer." Then the network is traversed from a node picked at random to the node that is farthest in numeric distance (50 steps).
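The "rich get richer" growth rule can also be sketched in a few lines (again, an illustrative sketch, not my program's code): start with three fully linked nodes, then attach each new node to three existing nodes chosen in proportion to how many links they already have. Keeping a flat list with one entry per link endpoint makes that weighted choice trivial.

```python
import random

def scale_free(n=100, m=3):
    """Barabási-Albert-style growth: new nodes prefer already-popular nodes."""
    adj = {i: set() for i in range(n)}
    targets = []  # one entry per link endpoint; uniform sampling from this
                  # list picks nodes in proportion to their link count
    for i in range(m):          # seed: m nodes all linked to each other
        for j in range(i + 1, m):
            adj[i].add(j); adj[j].add(i)
            targets += [i, j]
    for new in range(m, n):     # each new node attaches to m distinct nodes
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            targets += [new, t]
    return adj

random.seed(2)
g = scale_free()
degs = sorted((len(v) for v in g.values()), reverse=True)
print(degs[:5], degs[-5:])  # a few heavily linked hubs; most nodes keep just 3 links
```

The resulting link counts are very uneven: the earliest nodes snowball into hubs, which is exactly what makes traversal short even in a large network.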
suggested reading (books)