In Praise of Generalists



In his article In Praise of Generalists, Anthony Townsend (Institute for the Future) touches on why the time for elevating “specialists” has passed. To tackle the future, he argues, we need what he calls “transdisciplinary thinking,” and ongoing conversations that include, but are not restricted to, specialists.

~~~~~~~~~~~~

In Praise of Generalists

The last decade has been witness to the rise of the geeks. What began as a glorification of tech entrepreneurs making it big from the rise of the IT industry has now permeated every aspect of society. Single-minded obsession with obscure endeavors, hyper-specialization, and technical nerdery of all sorts are glorified across the board.

But is such geekery really a good way to foster talent? The most pressing problems in science and technology, and more broadly in business and the economy, don’t lend themselves readily to specialists’ solutions. They require not just interdisciplinary teamwork to make progress, but transdisciplinary thinking – literally, we need people who can have conversations between disciplinary approaches to problems inside their own heads. In fact, you could argue that most of the gridlock around big problems like global warming, health care, and so on, stems from the inability of narrow specialist and interest groups to speak each other’s language, translate heuristics, and integrate complex concepts and data. They’re too specialized, having become more and more isolated in focused communities, thanks to the web.

Let’s take a classic example of a geek to unpack this dilemma. London taxi drivers are uber-geeks, memorizing the entire fractal street network of one of the world’s biggest cities. In fact, they are so specialized that scientists have measured distinct enlargement of a portion of the hippocampus in their brains. Yet another recent study has found that the widespread use of GPS technology for personal navigation is reducing the ability of everyday people to find their way at all. On the one hand, the super geeks who can DIY; on the other, lost sheep perpetually dependent on assistive technology.

Before you cry foul and lament the loss of another basic human ability, let me ask you – are you lamenting the lost ability to tell time from environmental cues (destroyed by clocks), to do complex mathematical calculations in your mind (destroyed by calculators), or to remember facts (destroyed by Google)? No, because each of these technologies, to which we’ve outsourced some basic functions, has allowed us to give up some geekery in order to spend our precious brain cycles on broader, more integrative thinking. (Of course, the more worrying part of the study – that atrophy of the hippocampus might be tied to dementia – should not be overlooked. But it’s a very preliminary finding.)

I have alternated back and forth between geekery and generalism in my own career, and I can say without a doubt that I’m happier, more productive, and more relevant when I’m a generalist.


Why swarming locusts grow giant brains



One of my favorite blogs posted this recently. (If you do visit the original site, the comments are worth a scan too – who says geeks have no sense of humor?) I did read, though, that a leap in the evolution of the human brain is attributed to the way we increasingly banded into groups.

At the risk of anthropomorphizing locusts, I can’t help but wonder what living in urban settings might be doing to our brains – any ideas? How about swarming social networks? I can’t stop thinking “mob mentality”, but then…

Credit to the author: Tim Barribeau

___________________________________________________________________________________________

When the conditions are just right, solitary grasshoppers undergo a terrifying transformation that converts them into masses of swarming locusts that destroy crops. New research reveals why swarming locusts grow much bigger brains than ordinary grasshoppers.

During times of scarcity, locusts default to a solitary form, actively avoiding others of their species. However, when rain comes and plants bloom, the insects undergo a dramatic conversion. It’s thought to be triggered by their legs bumping into one another as population density increases: the grasshoppers shrink, change color, and change their behavior. They eat more, breed easily, and constantly pump serotonin into their bodies, which encourages the swarming.

So what happens to the brains of these insects when they so dramatically change? They significantly alter their behavior in order to survive as a swarm, which then has a dramatic effect on their brains. Researchers at the University of Cambridge compared the solitary and gregarious modes of the Desert Locust, and found intriguing alterations.

Even though in their swarming form the locusts are smaller than when they’re solitary, their brains are approximately 30% larger. With this transformation, the areas devoted to vision and smell decreased markedly, but there was a huge growth in the areas associated with learning and processing complex information.

In other words, their brains shift towards dealing with the intricacies of the swarm. Says Dr. Swidbert Ott:

Their bigger and profoundly different brains may help swarming locusts to survive in the cut-throat environment of a locust swarm. Who gets to the food first wins and if they don’t watch out, they themselves become food for other locusts. In a nutshell, you need to be brainier if you want to make it in the mayhem that is a locust swarm. As swarming locusts move through the landscape, they face much more of a challenge in finding and assessing potential foods, which may be something new that they have never encountered before.

The researchers hope this will provide more insight into the development and evolution of brains in response to social pressures and the environment.

via Proceedings of the Royal Society B



Mistaken Prediction #7


When we try to predict the future, we often allow our assumptions to argue for our own limitations, sometimes at our peril. In this series of Mistaken Predictions, we deride predictions that close our minds to the future and celebrate our collective visions that allowed us to imagine alternative scenarios. Equipped with tools that open us to near limitless options, we cheer the fact that the future is inherently unpredictable.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

“Ours has been the first [expedition], and doubtless to be the last, to visit this profitless locality.”
~ Lt. Joseph Ives, after visiting the Grand Canyon in 1861.

Ives couldn’t have fathomed that more than a century later, five million people annually visit this “profitless locality,” by car, foot, air, and on the Colorado River itself.

John Wesley Powell, a one-armed Civil War veteran, set off on a bold and pioneering expedition to fully map the Colorado as it wended its way through the Grand Canyon. The journey was a death-defying undertaking in the fragile wooden dories of the day. Less than a hundred miles into the trip, one of the expedition’s boats had already been smashed, taking with it much of the food supply. Yet Powell and his team persevered for 99 days, putting the Grand Canyon on the map of the US for the first time.

Soon, Powell was being celebrated as a national hero. When President Theodore Roosevelt visited the Grand Canyon in 1903, he made the famous remark: “The ages have been at work on it and man can only mar it. What you can do is to keep it for your children, your children’s children and all who come after you.” In 1919, the Grand Canyon would become a National Park.

Although John Wesley Powell started a cascade of interest in the Grand Canyon, he himself was wary of development around it. He prophesied that water shortages would become a major issue if the population of the American West soared too high. Powell would soon turn out to be right. By the turn of the 20th century, some of the world’s most colossal engineering projects were in motion to dam and divert the Colorado River to help quench the water needs of rapidly expanding Western cities.

The result is that today some 10 dams and 80 diversions have turned the Colorado into a vast plumbing works; the natural flow that John Wesley Powell witnessed is now so completely regulated that the river’s mouth – once a vibrant wetland at the Sea of Cortez – has run bone dry.

Before the huge Glen Canyon Dam was built in 1963, the river carried 500,000 tons of silt and sediment in a single day. Now, 95% of those nutrient-filled sediments are trapped by the dam. The river runs clear and cold, which makes it less friendly to life. River otters, muskrats, native birds, lizards, and frogs are rapidly disappearing.

Source: River at Risk

Organizations of interest:

Waterkeeper Alliance

Glen Canyon Institute – dedicated to restoring a healthy Colorado River

Mistaken Prediction #5


When we try to predict the future, we often allow our assumptions to argue for our own limitations, sometimes at our peril. In this series of Mistaken Predictions, we deride predictions that close our minds to the future and celebrate our collective visions that allowed us to imagine alternative scenarios. Equipped with tools that open us to near limitless options, we cheer the fact that the future is inherently unpredictable.

~~~~~~~~~~~~~~~~~~~~~

“Heavier-than-air flying machines are impossible.”

~ Lord Kelvin, 1895.

This prediction came from Lord Kelvin, the British mathematician and physicist and president of the Royal Society, only eight years before brothers Orville and Wilbur Wright took their home-built flyer to the sandy dunes of Kitty Hawk, cranked up the engine, and took off into the history books.

‘Nuff said.

Mistaken Prediction #3


When we try to predict the future, we often allow our assumptions to argue for our own limitations, sometimes at our peril. In this series of Mistaken Predictions, we deride predictions that close our minds to the future and celebrate our collective visions that allowed us to imagine alternative scenarios. Equipped with tools that open us to near limitless options, we cheer the fact that the future is inherently unpredictable.

~~~~~~~~~~~~~~~~~~~~~~~~~~

“A rocket will never be able to leave the Earth’s atmosphere.”

–New York Times, 1936.

The first rocket to leave the Earth’s atmosphere was the American-built WAC Corporal, launched on March 22, 1946, from White Sands, NM; it attained an altitude of 50 miles.

“An Act to provide for research into the problems of flight within and outside the Earth’s atmosphere, and for other purposes.”

With this simple preamble, the Congress and the President of the United States created the National Aeronautics and Space Administration (NASA) on October 1, 1958. NASA’s birth was directly related to the pressures of national defense. After World War II, the United States and the Soviet Union were engaged in the Cold War, a broad contest over the ideologies and allegiances of the nonaligned nations. During this period, space exploration emerged as a major area of contest and became known as the space race.

A full-scale crisis resulted on October 4, 1957, when the Soviets launched Sputnik 1, the world’s first artificial satellite, as their entry in the International Geophysical Year (IGY). This had a “Pearl Harbor” effect on American public opinion, creating the illusion of a technological gap and providing the impetus for increased spending on aerospace endeavors, technical and scientific educational programs, and the chartering of new federal agencies to manage air and space research and development.

The United States launched its first Earth satellite, Explorer 1, on January 31, 1958; it documented the existence of radiation zones encircling the Earth. Shaped by the Earth’s magnetic field, these zones, which came to be called the Van Allen radiation belts, partially dictate the electrical charges in the atmosphere and the solar radiation that reaches Earth.

In 1957, Laika, the Soviet space dog, became the first animal to orbit the Earth and, sadly, the first orbital death. On April 12, 1961, Yuri Gagarin became the first human in outer space.

Launched on July 16, 1969, Apollo 11 was crewed by Commander Neil Alden Armstrong, Command Module Pilot Michael Collins, and Lunar Module Pilot Edwin Eugene ‘Buzz’ Aldrin, Jr. On July 20, Armstrong and Aldrin became the first humans to land on the Moon, while Collins orbited in the Command Module.

Rethinking artificial intelligence


David L. Chandler, MIT News Office


The field of artificial-intelligence research (AI), founded more than 50 years ago, seems to many researchers to have spent much of that time wandering in the wilderness, swapping hugely ambitious goals for a relatively modest set of actual accomplishments. Now, some of the pioneers of the field, joined by later generations of thinkers, are gearing up for a massive “do-over” of the whole idea.
This time, they are determined to get it right — and, with the advantages of hindsight, experience, the rapid growth of new technologies and insights from the new field of computational neuroscience, they think they have a good shot at it.

The new project, launched with an initial $5 million grant and a five-year timetable, is called the Mind Machine Project, or MMP, a loosely bound collaboration of about two dozen professors, researchers, students and postdocs. According to Neil Gershenfeld, one of the leaders of MMP and director of MIT’s Center for Bits and Atoms, one of the project’s goals is to create intelligent machines — “whatever that means.”

The project is “revisiting fundamental assumptions” in all of the areas encompassed by the field of AI, including the nature of the mind and of memory, and how intelligence can be manifested in physical form, says Gershenfeld, professor of media arts and sciences. “Essentially, we want to rewind to 30 years ago and revisit some ideas that had gotten frozen,” he says, adding that the new group hopes to correct “fundamental mistakes” made in AI research over the years.

The birth of AI as a concept and a field of study is generally dated to a conference in the summer of 1956, where the idea took off with projections of swift success. One of that meeting’s participants, Herbert Simon, predicted in the 1960s, “Machines will be capable, within 20 years, of doing any work a man can do.” Yet two decades beyond that horizon, that goal now seems to many to be as elusive as ever.

It is widely accepted that AI has failed to realize many of those lofty early promises. “Considering the outrageous optimism of much of the early hype for AI, it is no wonder that it couldn’t deliver. This is an occupational hazard of many new fields,” says Daniel Dennett, a professor of philosophy at Tufts University and co-director of the Center for Cognitive Science there. Still, he says, it hasn’t all been for nothing: “The reality is not dazzling, but still impressive, and many applications of AI that were deemed next-to-impossible in the ’80s are routine today,” including the automated systems that answer many phone inquiries using voice recognition.

Fixing what’s broken

Gershenfeld says he and his fellow MMP members “want to go back and fix what’s broken in the foundations of information technology.” He says that there are three specific areas — having to do with the mind, memory, and the body — where AI research has become stuck, and each of these will be addressed in specific ways by the new project.

The first of these areas, he says, is the nature of the mind: “how do you model thought?” In AI research to date, he says, “what’s been missing is an ecology of models, a system that can solve problems in many ways,” as the mind does.

Part of this difficulty comes from the very nature of the human mind, evolved over billions of years as a complex mix of different functions and systems. “The pieces are very disparate; they’re not necessarily built in a compatible way,” Gershenfeld says. “There’s a similar pattern in AI research. There are lots of pieces that work well to solve some particular problem, and people have tried to fit everything into one of these.” Instead, he says, what’s needed are ways to “make systems made up of lots of pieces” that work together like the different elements of the mind. “Instead of searching for silver bullets, we’re looking at a range of models, trying to integrate them and aggregate them,” he says.
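
To make the “ecology of models” idea a little more concrete, here is a minimal Python sketch (entirely hypothetical, not the MMP’s actual software): several independent solvers are offered the same problem, those that apply return an answer with a self-reported confidence, and a simple aggregator picks among them.

# Toy "ecology of models": several independent solvers attempt the same
# problem, and a simple aggregator reconciles their answers. Illustrative
# only -- the solver names and strategies here are hypothetical, not MMP code.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Answer:
    value: str
    confidence: float  # self-reported, 0.0 .. 1.0

def rule_based_solver(problem: str) -> Optional[Answer]:
    # A narrow expert: handles only one pattern, but handles it well.
    if "2 + 2" in problem:
        return Answer("4", confidence=0.99)
    return None

def analogy_solver(problem: str) -> Optional[Answer]:
    # A looser heuristic that sometimes applies, with lower confidence.
    if "capital" in problem:
        return Answer("a large central city", confidence=0.4)
    return None

def fallback_solver(problem: str) -> Optional[Answer]:
    # Always answers, but admits it is guessing.
    return Answer("I don't know", confidence=0.05)

SOLVERS: List[Callable[[str], Optional[Answer]]] = [
    rule_based_solver, analogy_solver, fallback_solver,
]

def solve(problem: str) -> Answer:
    """Ask every model, keep those that respond, return the most confident."""
    candidates = [a for s in SOLVERS if (a := s(problem)) is not None]
    return max(candidates, key=lambda a: a.confidence)

if __name__ == "__main__":
    print(solve("What is 2 + 2?"))      # the rule-based expert wins
    print(solve("What is a capital?"))  # the analogy heuristic answers

The point of the toy is the aggregation step: no single model is asked to cover everything, and the system degrades gracefully when the narrow experts do not apply.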

The second area of focus is memory. Much work in AI has tried to impose an artificial consistency of systems and rules on the messy, complex nature of human thought and memory. “It’s now possible to accumulate the whole life experience of a person, and then reason using these data sets which are full of ambiguities and inconsistencies. That’s how we function — we don’t reason with precise truths,” he says. Computers need to learn “ways to reason that work with, rather than avoid, ambiguity and inconsistency.”

And the third focus of the new research has to do with what they describe as “body”: “Computer science and physical science diverged decades ago,” Gershenfeld says. Computers are programmed by writing a sequence of lines of code, but “the mind doesn’t work that way. In the mind, everything happens everywhere all the time.” A new approach to programming, called RALA (for reconfigurable asynchronous logic automata) attempts to “re-implement all of computer science on a base that looks like physics,” he says, representing computations “in a way that has physical units of time and space, so the description of the system aligns with the system it represents.” This could lead to making computers that “run with the fine-grained parallelism the brain uses,” he says.
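
For a feel of what “computation with physical units of time and space” might mean, here is a toy Python sketch, a loose illustration in the spirit of asynchronous logic automata rather than the actual RALA design: each cell holds one bit and a local gate, and cells update one at a time from their immediate neighbours only.

# Toy illustration in the spirit of asynchronous logic automata (NOT the
# actual RALA design): computation is laid out as a row of cells, each cell
# holds one bit and a local gate, and cells update one at a time from the
# states of their immediate neighbours -- no global clock, no shared memory.

import random
from typing import Callable, List, Optional

def or_gate(a: int, b: int) -> int:
    return a | b

def and_gate(a: int, b: int) -> int:
    return a & b

class Cell:
    def __init__(self, gate: Optional[Callable[[int, int], int]], state: int = 0):
        self.gate = gate    # local rule; None marks a fixed input/boundary cell
        self.state = state  # one bit of local state

def step(cells: List[Cell]) -> None:
    """Pick one interior cell at random and let it recompute its bit from its
    two neighbours. Because each update reads only local state, the order of
    updates never needs to be coordinated globally."""
    i = random.randrange(1, len(cells) - 1)
    left, right = cells[i - 1].state, cells[i + 1].state
    cells[i].state = cells[i].gate(left, right)

if __name__ == "__main__":
    random.seed(0)
    # Boundary cells act as fixed inputs (both 1); interior cells compute.
    cells = [Cell(None, 1), Cell(or_gate), Cell(and_gate), Cell(None, 1)]
    for _ in range(50):                 # plenty of asynchronous updates
        step(cells)
    print([c.state for c in cells])     # with these gates, settles at [1, 1, 1, 1]

The design choice being illustrated is locality: because each cell reads only its neighbours, the description of the computation lines up with where and when it physically happens.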

MMP group members span five generations of artificial-intelligence research, Gershenfeld says. Representing the first generation is Marvin Minsky, professor of media arts and sciences and computer science and engineering emeritus, who has been a leader in the field since its inception. Ford Professor of Engineering Patrick Winston of the Computer Science and Artificial Intelligence Laboratory is one of the second-generation researchers, and Gershenfeld himself represents the third generation. Ed Boyden, a Media Lab assistant professor and leader of the Synthetic Neurobiology Group, was a student of Gershenfeld and thus represents the fourth generation. And the fifth generation includes David Dalrymple, one of the youngest students ever at MIT, where he started graduate school at the age of 14, and Peter Schmidt-Nielsen, a home-schooled prodigy who, though he never took a computer science class, at 15 is taking a leading role in developing design tools for the new software.

The MMP project is led by Newton Howard, who came to MIT to head this project from a background in government and industry computer research and cognitive science. The project is being funded by the Make a Mind Company, whose chairman is Richard Wirt, an Intel Senior Fellow.

“To our knowledge, this is the first collaboration of its kind,” Boyden says. Referring to the new group’s initial planning meetings over the summer, he says “what’s unique about everybody in that room is that they really think big; they’re not afraid to tackle the big problems, the big questions.”

The big picture

Harvard (and former MIT) cognitive psychologist Steven Pinker says that it’s that kind of big-picture thinking that has been sorely lacking in AI research in recent years. Since the 1980s, he says, “there was far more focus on getting software products to market, regardless of whether they instantiated interesting principles of intelligent systems that could also illuminate the human mind. This was a real shame, in my mind, because cognitive psychologists (my people) are largely atheoretical lab nerds, linguists are narrowly focused on their own theoretical paradigms, and philosophers of mind are largely uninterested in mechanism.

“The fading of theoretical AI has led to a paucity of theory in the sciences of mind,” Pinker says. “I hope that this new movement brings it back.”

Boyden agrees that the time is ripe for revisiting these big questions, because there have been so many advances in the various fields that contribute to artificial intelligence. “Certainly the ability to image the neurological system and to perturb the neurological system has made great advances in the last few years. And computers have advanced so much — there are supercomputers for a few thousand dollars now that can do a trillion operations per second.”

Minsky, one of the pioneering researchers from AI’s early days, sees real hope for important contributions this time around. Decades ago, the computer visionary Alan Turing famously proposed a simple test — now known as the Turing Test — to determine whether a machine could be said to be truly intelligent: If a person communicating via computer terminal could carry on a conversation with a machine but couldn’t tell whether or not it was a person, then the machine could be deemed intelligent. But annual “Turing test” competitions have still not produced a machine that can convincingly pass for human.

Now, Minsky proposes a different test that would determine when machines have reached a level of sophistication that could begin to be truly useful: whether the machine can read a simple children’s book, understand what the story is about, and explain it in its own words or ask reasonable questions about it.

It’s not clear whether that’s an achievable goal on this kind of timescale, but Gershenfeld says, “We need good challenging projects that force us to bring our program together.”

One of the projects being developed by the group is a form of assistive technology they call a brain co-processor. This system, also referred to as a cognitive assistive system, would initially be aimed at people suffering from cognitive disorders such as Alzheimer’s disease. The concept is that it would monitor people’s activities and brain functions, determine when they needed help, and provide exactly the right bit of helpful information — for example, the name of a person who just entered the room, and information about when the patient last saw that person — at just the right time.

The same kind of system, members of the group suggest, could also find applications for people without any disability, as a form of brain augmentation — a way to enhance their own abilities, for example by making everything from personal databases of information to all the resources of the internet instantly available just when it’s needed. The idea is to make the device as non-invasive and unobtrusive as possible — perhaps something people would simply slip on like a pair of headphones.
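
As a purely illustrative sketch of the kind of event-driven logic such a co-processor might run (every name, field, and record below is a hypothetical placeholder, not part of the MMP design), consider:

# Minimal sketch of the "brain co-processor" idea described above: watch a
# stream of events, decide when the user probably needs help, and surface a
# small, relevant piece of information at that moment. All names, fields,
# and data here are hypothetical placeholders.

import datetime
from typing import Optional

# Hypothetical personal memory store: who the user knows and when they last
# met. A real system would draw on far richer records.
CONTACTS = {
    "face:0017": {"name": "Dr. Alice Rivera",
                  "last_seen": datetime.date(2010, 3, 2)},
}

def needs_help(event: dict) -> bool:
    """Stand-in trigger: prompt only when recognition appears to have failed."""
    return event.get("type") == "person_entered" and not event.get("user_greeted")

def assist(event: dict) -> Optional[str]:
    """Return a short, timely prompt, or nothing if no help is needed."""
    if not needs_help(event):
        return None
    person = CONTACTS.get(event.get("face_id"))
    if person is None:
        return "Someone new has entered the room."
    return (f"This is {person['name']}; you last saw them on "
            f"{person['last_seen']:%B %d, %Y}.")

if __name__ == "__main__":
    event = {"type": "person_entered", "face_id": "face:0017",
             "user_greeted": False}
    print(assist(event))  # prompts with the name and last-seen date

A real co-processor would of course infer the trigger from brain and activity monitoring rather than from a hand-labelled event, but the shape of the loop (observe, detect need, retrieve, prompt) is the same.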

Boyden suggests that the project’s initial five-year timeframe seems about right. “It’s long enough that people can take risks and try really adventurous ideas,” he says, “but not so long that we won’t get anywhere.” It’s a short enough span to produce “a useful kind of pressure,” he says. Among the concepts the group may explore are concepts for “intelligent,” adaptive books and games — or, as Gershenfeld suggests, “books that think.”

In the longer run, Minsky still sees hope for far grander goals. For example, he points to the fact that his iPhone can now download thousands of different applications, instantly allowing it to perform new functions. Why not do the same with the brain? “I would like to be able to download the ability to juggle,” he says. “There’s nothing more boring than learning to juggle.”
