Things are changing. That's no surprise; our universe is not a static place. Just in the course of my life, we've seen the creation of the commercial Internet, Space Shuttles, Mars rovers, the sequencing of the human genome, personal genomes on-demand, a vaccine that can prevent some forms of cancer, and much, much more. The future holds even greater promise.
Not only that, but people are living longer (that's one more thing science has done). I will likely live longer than my parents (though not by much). My son's generation, though, will likely live to be 120-150 years old, and they'll spend most of those years healthy, if groups like the Methuselah Foundation have anything to say about it.
So what changes might I see in my lifetime? What changes will you see in yours? What changes will my son see in his?
That's the question the Edge's World Question Center is asking: What game-changing scientific ideas and developments do you expect to live to see?
They put that question to a sizeable number of eminent thinkers in a variety of fields and, naturally, they got a variety of answers. Interestingly, in addition to such luminaries as Gregory Benford, Robert Shapiro, Lawrence Krauss, and Aubrey de Grey, they also have input from the likes of Alan Alda and Brian Eno.
Some of the posts are really insightful; some less so. But it got me wondering: what if more than one of their suggestions turns out to be correct? It's one thing to talk about advanced artificial general intelligences, or molecular-scale manufacturing, or synthetic biology. But what if we're talking about all of those things, at roughly the same time? It seems unlikely (barring an AI causing a Singularity and creating the other advances). But what kind of world might we live in if advanced AIs could create anything they wanted, including living organisms, at the molecular level?
There's a lot of promise, but also a lot of risk and questions. Read the answers on the Edge's site, but while you're doing so, keep in mind the risks of some of these predictions coming true. And, if it scares you a little bit, take a trip over to the Lifeboat Foundation website.
Monday, May 19, 2008
Are You Sure That Avatar You're Talking to in Second Life is a Person?
The Associated Press ran an article yesterday about how researchers at Rensselaer Polytechnic Institute have created an artificial intelligence that can operate a Second Life avatar. The avatar even has a name: Edd Hifeng.
Edd has a limited ability to converse, but what really makes this AI entity interesting is its ability to make inferences. In one example, Edd witnessed a different avatar switching a gun from one briefcase to another. Edd was able to infer that another avatar not currently in the room would believe the gun to still be in the first briefcase.
It may seem fairly simple to you and me, but this ability to make inferences has long been a weakness of artificial intelligence. Bridging this gap is a big step toward creating artificially intelligent entities.
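This kind of false-belief reasoning is easy to illustrate in miniature. The sketch below is not RPI's system (which, as I understand it, is built on a formal reasoning engine); it's just a toy Python illustration of the bookkeeping involved, with every name in it made up: an agent's belief about an object's location is updated only by the events that agent actually witnessed, so it can diverge from the true state of the world.

```python
# Toy sketch of false-belief reasoning: an agent's belief about an object's
# location comes only from events that agent actually witnessed, so it can
# lag behind the true state of the world. All names here are made up.

class BeliefWorld:
    def __init__(self):
        self.true_location = {}   # object -> where it really is
        self.beliefs = {}         # (agent, object) -> where that agent thinks it is

    def observe_move(self, obj, dest, witnesses):
        """Move obj to dest; only the listed witnesses update their beliefs."""
        self.true_location[obj] = dest
        for agent in witnesses:
            self.beliefs[(agent, obj)] = dest

    def believed_location(self, agent, obj):
        """Where does this agent believe the object is?"""
        return self.beliefs.get((agent, obj))


world = BeliefWorld()
# Everyone watches the gun being placed in briefcase A.
world.observe_move("gun", "briefcase A", witnesses=["Edd", "Alice", "Bob"])
# Bob has left the room; only Edd and Alice see the gun moved to briefcase B.
world.observe_move("gun", "briefcase B", witnesses=["Edd", "Alice"])

print(world.true_location["gun"])             # -> briefcase B
print(world.believed_location("Bob", "gun"))  # -> briefcase A (a false belief)
```

The hard part, of course, is not the bookkeeping but getting a machine to build and query representations like this on its own from what it perceives.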
I'm all in favor of this type of research into artificial intelligence, as long as it can be done responsibly. As a software developer myself, I do have some concerns about the day when computers become smarter than us. The Matrix and The Terminator offer entertaining (but not very realistic) scenarios of what could happen if our creations decide that we have become obsolete. So as this type of research continues to advance, I'm going to be paying close attention to it and stopping by occasionally to check on how the Lifeboat Foundation's AIShield project is coming along.
Thursday, April 5, 2007
MIT Researchers Teach Computer to See Like a Person
A team of researchers at the Massachusetts Institute of Technology has created a computational model of vision that mimics the way humans see and interpret images. The model, designed to follow the way the brain itself processes visual information, performs as well as humans do on rapid categorization tasks. It even tends to make the same kinds of errors humans make, possibly because it so closely follows the organization of the brain's visual system.
This new study supports a long-held hypothesis that rapid categorization happens without any feedback from cognitive or other areas of the brain. The results also indicate that the model can help neuroscientists make predictions and drive new experiments to explore brain mechanisms involved in human visual perception, cognition, and behavior. Deciphering the relative contribution of feed-forward and feedback processing may eventually help explain neuropsychological disorders such as autism and schizophrenia. The model also bridges the gap between the world of artificial intelligence (AI) and neuroscience because it may lead to better artificial vision systems and augmented sensory prostheses.
Importantly, the results showed no significant difference between humans and the model. Both had a similar pattern of performance, with accuracy well above 90% for close views dropping to 74% for distant views. The 16% drop in performance for distant views represents a limitation of a single feed-forward sweep in dealing with clutter. Still, the researchers caution that "We have not solved vision yet." With more time for cognitive feedback, people would outperform the model because they could focus attention on the target and ignore the clutter.
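To make the "one feed-forward sweep" idea concrete, here is a minimal toy sketch in Python/NumPy. This is my own illustration, not the MIT group's actual model (which is far more elaborate): a single pass of local filtering stands in for simple-cell responses, max pooling stands in for complex-cell invariance, and a linear readout makes the category call, with no feedback loop anywhere. All filters, images, and weights here are placeholders.

```python
# Minimal sketch of a one-sweep feed-forward categorization pipeline:
# local filtering ("simple cell" responses), max pooling ("complex cell"
# invariance), then a linear readout. Purely illustrative; not the MIT model.
import numpy as np

def filter_layer(image, filters):
    """Convolve the image with each small filter (valid mode) and rectify."""
    h, w = image.shape
    fh, fw = filters[0].shape
    responses = []
    for f in filters:
        out = np.zeros((h - fh + 1, w - fw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + fh, j:j + fw] * f)
        responses.append(np.maximum(out, 0.0))   # rectification
    return responses

def pool_layer(responses, size=4):
    """Max-pool each response map to gain some position invariance."""
    feats = []
    for r in responses:
        h, w = r.shape
        for i in range(0, h - size + 1, size):
            for j in range(0, w - size + 1, size):
                feats.append(r[i:i + size, j:j + size].max())
    return np.array(feats)

# Two toy oriented filters (horizontal and vertical edge detectors).
filters = [np.array([[1., 1.], [-1., -1.]]),
           np.array([[1., -1.], [1., -1.]])]

rng = np.random.default_rng(0)
image = rng.random((16, 16))                     # stand-in for a photograph
features = pool_layer(filter_layer(image, filters))

# Linear readout: in a real study the weights are learned from labeled
# examples; here they are random placeholders.
weights = rng.standard_normal(features.shape[0])
print("category score:", float(features @ weights))
```

The key property is that information only flows forward: nothing computed at the readout stage ever reaches back to change the filtering or pooling, which is exactly the limitation the researchers point to when the scene gets cluttered.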
This model is a big step toward real artificial intelligence (is that an oxymoron?). Computers that use a visual model to interpret what they are seeing could lead to systems that perform many of the tasks that are restricted to people today.
Tuesday, January 23, 2007
Grand Challenges
The National Academy of Engineering (NAE) has officially launched what it calls a "worldwide brainstorming session," Grand Challenges for Engineering. The purpose is to identify the major engineering challenges to be addressed during the 21st Century.
The project grew out of a brainstorming session to determine the greatest, highest-impact engineering feats of the 20th Century. Anyone can submit ideas (this means you) to the list, and many people already have. The list will be reviewed by a panel of experts including J. Craig Venter, Larry Page, Dean Kamen, Ray Kurzweil, and William Perry, among others. If you don't know who these people are, you should... go to the website and read their bios.
I haven't submitted my ideas to the list yet, but I will. First, I'm going to list some thoughts here:
- Mind-Machine Interface - Our knowledge and understanding of the human brain and the human mind have advanced more in the past 15 years than in all the time before. We now have systems that can detect a person's thought patterns and act according to a prescribed set of rules, systems that have allowed paralyzed people to operate machinery. Improvements in this technology will result in true cybernetics: replacement limbs, cures for paralysis, and eventually devices that help the blind to see and the deaf to hear.
- Low-Cost Orbital Access - And by "low-cost" I mean around the current price of an airline ticket. Rockets are never going to reach that pricing level, and it's time to stop pretending that they will. There are, however, some means that will work. A space elevator, while massively expensive to design and build, would lower cost-to-orbit dramatically. And gravity control, while firmly in the realm of science fiction for now, would be an enabler of so many things I can't even list them in this post. The hurdles in both cases are mainly engineering challenges (though in the case of gravity control, there is some basic science yet to be done), and they are hurdles that can be overcome.
- Anti-Senescence - There are a limited number of causes of cell death, and we are close to understanding many of them. The remaining challenges span both science and engineering, but they are not insurmountable. Understanding and being able to control cell death could lead to cures for cancer, Alzheimer's disease, and many other diseases, as well as rejuvenation therapies. Some people believe it may even be possible to eliminate aging as a cause of death.
- Clean, Reliable Power Generation - Most of our current means of generating electricity are destructive: coal, natural gas, and oil all create pollution in various amounts, and nuclear energy leaves us with large amounts of waste that will take eons to decay. Only renewable, non-polluting sources such as wind, solar, and geothermal will meet our energy demands without irreparably damaging the planet we live on. Solar power satellites beaming power as microwaves to ground receiver stations, supplemented by huge geothermal projects, could supply all of the energy we need to grow in the 21st Century.
- Asteroid Mining - Earth has a finite supply of resources, and they're difficult to get to. A typical asteroid, meanwhile, contains trillions of (2007) dollars' worth of precious metals, and we could mine them without polluting our water sources here on Earth. The first organization that does so will truly open up the space market by making massive profits, creating a "gold rush" in space.
- Artificial Intelligence - True artificial intelligence is not that far away (although it's also not as close as some people would like to believe). There will be varying levels of it, ranging from slow-thinking, distributed neural-network-based systems to very limited, task-specific (but portable) devices that handle your day-to-day chores, such as driving. Autonomous systems would virtually remove the human-error element from automobile and airplane travel, surgery, and commerce. Artificial intelligence will also be used (even in the near term) to aid in product design by evolving designs with genetic algorithms, allowing the rapid design of improved products (a rough sketch of that idea follows this list).
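As promised in the last item, here is a minimal sketch of design-by-genetic-algorithm in Python. The "design" is a toy one, choosing a beam's width and height to carry a required load with as little material as possible, and everything in it, from the fitness function to the constants, is invented purely for illustration.

```python
# Toy genetic algorithm for a two-parameter "design" (beam width and height).
# Fitness, constants, and the strength model are all placeholders.
import random

POP_SIZE, GENERATIONS, MUTATION = 40, 60, 0.3

def fitness(design):
    """Lower is better: material used, plus a penalty if the beam is too weak."""
    width, height = design
    area = width * height                        # material cost proxy
    strength = width * height ** 2               # bending-strength proxy
    penalty = max(0.0, 50.0 - strength) * 10.0   # must reach strength 50
    return area + penalty

def mutate(design):
    """Nudge each parameter randomly, keeping it positive."""
    return tuple(max(0.1, g + random.gauss(0, MUTATION)) for g in design)

def crossover(a, b):
    """Take each parameter from one parent or the other at random."""
    return tuple(random.choice(pair) for pair in zip(a, b))

population = [(random.uniform(0.1, 5), random.uniform(0.1, 5))
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness)
    parents = population[:POP_SIZE // 2]         # keep the fitter half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best width, height:", best, "fitness:", fitness(best))
```

A real engineering use would swap in a physically meaningful fitness function (a finite-element simulation, say), but the evolve-evaluate-select loop looks much the same.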
These are just some of the (many) ideas I have for engineering challenges to be addressed in the 21st Century. In a way, it makes me sad that I'm not an engineer.