Photo by Tyler Kendall

WASHINGTON – For years, pop culture has offered a wide range of notions about robots. When we envision the metal creatures, our ideas can be placed on a scale ranging from Rosie, the harmless robotic maid in the “Jetsons” cartoon, to the cyborg assassin in the “Terminator.”

But which vision of robots is accurate? Or more importantly, which vision should we prepare for?

A panel held Wednesday by Future Tense, a non-partisan partnership among the New America Foundation, Arizona State University and Slate magazine, discussed the evolution of robotic technology and the increasingly symbiotic relationship between robots and humans.

One of the panelists, Patric Verrone, writer and producer for the science fiction sitcom “Futurama,” said he expected “what we’re going to see in the next 100 years is an evolution in our own thinking about machinery.”

“That’s going to change the social fabric in terms of our relationship with particularly humanoid creatures,” Verrone added. “I think we’re going to go from a Frankenstein era where we fear them to the Stepford Wives idea where we have a general idea of falling in love with them.”

A recurring theme highlighted by the panel: robots, functionally, will likely evolve to mirror their creators — humans.

Woodrow Hartzog, assistant professor at Samford University, said that in human relationships “you have to earn intimacy,” whereas a robot can be programmed to have that connection installed.

“But maybe that’s something we seek to preserve,” Hartzog said. “That’s usually where we have the disconnect with humanoid robots.”

The panel discussed the concept of free will — and whether robots can develop human tendencies on their own. Verrone called free will “a conundrum for a futurist”: deciding whether a robot should have the ability to evolve on its own.

Lance Gharavi, associate professor at Arizona State University, brought up the concept of the “moral responsibility” humans will one day have to robots. He argued that if “they have desires, they can suffer, and if they can suffer we are responsible to them.”  In such circumstances, he said humans would have to start considering the status of robots in a moral universe.

The panel also analyzed recent technology developments such as Jibo, marketed as “the world’s first social robot,” and Hello Barbie, a new Wi-Fi-enabled doll that would transmit information to parents when their children are playing. The discussion centered on the importance of design decisions, noting, for example, that Jibo’s inability to move around (it can talk but stays stationary) was a conscious privacy choice by its programmers.

The DARwIn-OP robot from Robotis, brought by Lockheed Martin Advanced Technologies, made an appearance after the discussion. About one foot tall, the waving, dancing machine with glowing blue eyes is designed to have “theory of mind,” the ability to make inferences about other people’s beliefs, desires and attitudes.

“One of the things we’re designing robots to do is partner well with people,” said William Casebeer, 46, research area manager of human systems and autonomy for Lockheed Martin. “We want robots to be teammates for us.”

However, despite these humanoid robot developments, technology has yet to put robots in a role superior to humans.

“Humans, whatever our faults, have an enormous variety of skills needed by humans,” Gharavi said. “Robots so far are very narrow specialists.”

The emotional attachments humans are developing to robots, though, according to Christine Rosen, senior editor of The New Atlantis, “will make the way we think about our cellphones look like puppy love.”