
Will a dog develop feelings for a robot similar to those it develops for a human?


I think we are connected to the world first by biochemistry, and then by physical and verbal interactions. If a robot fed a dog regularly, pampered it, and cuddled it, would the dog develop the same feelings for the robot that it would have developed for a human? Assume a sci-fi robot like Ultron from the Avengers movies.


Assuming you mean a robot "designed to be human" (that is, with human-like behavior patterns, human-comparable intelligence, self-awareness, and the ability to learn), then from the dog's perspective, the only significant difference is the smell.

There is no scientific evidence for mystical or metaphysical "connections to the world".

From the dog's perspective, the defining factors are:

Owner's behavior (petting, feeding, walking, training, rewards for good behavior or learning tricks). In this case, the robot is no different from the human.

Owner's speech (voiceprint). There's evidence that dogs understand human speech to some extent (https://www.independent.co.uk/news/science/dogs-can-understand-human-speech-scientists-say-a7216481.html). Given how adaptable organic brains are, I assume a dog would be able to adapt over time to a synthetic (digitally altered) voiceprint, or to one that spoke in a conspicuously robotic manner (for example, "Temperature alert. Please fetch an external cooling fan." / "Your assistance above your designated specifications continues to be noted." instead of "It's getting hot in here. Grab a soda from the fridge over there?" / "Good dog!").

Owner's scent. The only real difference from a dog's perspective. A robot would lack the distinctive scent cues of a human, unless it was designed to produce those as well. Even if those aren't present, a dog should be able to adapt to a robotic owner's distinctive smell (plastic with motor oil and a hint of ozone).


What Interacting With Robots Might Reveal About Human Nature

The most urgent question for people is not whether machines will take their jobs, but how machines will change the way they behave in society.

Robot panic seems to move in cycles, as new innovations in technology drive fear about machines that will take over our jobs, our lives, and our society—only to collapse as it becomes clear just how far away such omnipotent robots are. Today’s robots can barely walk effectively, much less conquer civilization.

But that doesn't mean there aren't good reasons to be nervous. The more pressing problem today is not what robots can do to our bodies and livelihoods, but what they will do to our brains.

“The problem is not that if we teach robots to kick they’ll kick our ass,” Kate Darling, an MIT robot ethicist, said Thursday at the Aspen Ideas Festival, which is co-hosted by the Aspen Institute and The Atlantic. “We have to figure out what happens to us if we kick the robots.”

That’s not just a metaphor. Two years ago, Boston Dynamics released a video showing employees kicking a dog-like robot named Spot. The idea was to show that the machine could regain its balance when knocked askew. But that wasn’t the message many viewers took. Instead, they were horrified by what resembled animal cruelty. PETA even weighed in, saying that “PETA deals with actual animal abuse every day, so we won’t lose sleep over this incident,” but adding that “most reasonable people find even the idea of such violence inappropriate.”

The Spot incident, along with the outpouring of grief for the "Hitchbot"—a friendly android that asked people to carry it around the world, but met an untimely demise in Philadelphia—shows the strange ways humans seem to associate with robots. Darling reeled off a series of other examples: People name their Roombas, and feel pity for them when they get stuck under furniture. They are reluctant to touch the "private areas" of robots, even only vaguely humanoid ones. Robots have been shown to be more effective than traditional methods at helping people lose weight, because of the social interaction involved.

People are more forgiving of robots’ flaws when they are given human names, and a Japanese manufacturer has its robots “stretch” with human workers to encourage the employees to think of the machines as colleagues rather than tools. Even when robots don’t have human features, people develop affection toward them. This phenomenon has manifested in soldiers bonding with bomb-dismantling robots that are neither anthropomorphic nor autonomous: The soldiers take care of them and repair them as though they were pets.

“We treat them as though they’re alive even though we know perfectly well they’re machines,” Darling said.

That can be good news—whether it’s as weight-loss coaches or therapy aides for autistic children—but it also opens up unexplored ethical territory. Human empathy is a volatile, unpredictable force, and if it can be manipulated for good, it can be manipulated for bad as well. Might people share sensitive personal information or data more readily with a robot they perceive as partly human than they would ever be willing to share with a “mere” computer?

Social scientists (and anxious parents) have wondered for years about the effect of violent video games on children and adults alike. Even as those questions remain unresolved, an increasing number of interactions with robots will create their own version of that debate. Could kicking a robot like Spot desensitize people, and make them more likely to kick their (real) dogs at home? Or, could the opportunity to visit violence on robots provide an outlet to divert dangerous behaviors? (Call it the Westworld hypothesis.)

An even more pungent version of that dilemma could revolve around child-size sex robots. Would such a thing provide a useful outlet for sex offenders, or would it simply make pedophilia seem more acceptable? Making the dilemma more challenging, it’s extremely difficult to research that question.

The sway that even rudimentary robots can hold over humans was clear near the end of Darling’s talk. A short robot whirred out on stage to alert her that she had five minutes left to speak. The audience, which had just listened to a thoughtful, in-depth litany of the ethical challenges of human-robot interactions, cooed involuntarily at the cute little machine. And Darling, who had just delivered the litany, knelt down to pat its head.


Bruisable artificial skin could help prosthetics, robots sense injuries

An artificial skin attached to a person’s knee develops a purple “bruise” when hit forcefully against a metal cabinet. Credit: Adapted from ACS Applied Materials & Interfaces

When someone bumps their elbow against a wall, they not only feel pain but also might experience bruising. Robots and prosthetic limbs don't have these warning signs, which could lead to further injury. Now, researchers reporting in ACS Applied Materials & Interfaces have developed an artificial skin that senses force through ionic signals and also changes color from yellow to a bruise-like purple, providing a visual cue that damage has occurred.

Scientists have developed many different types of electronic skins, or e-skins, that can sense stimuli through electron transmission. However, these electrical conductors are not always biocompatible, which could limit their use in some types of prosthetics. In contrast, ionic skins, or I-skins, use ions as charge carriers, similar to human skin. These ionically conductive hydrogels have superior transparency, stretchability and biocompatibility compared with e-skins. Qi Zhang, Shiping Zhu and colleagues wanted to develop an I-skin that, in addition to registering changes in electrical signal with an applied force, could also change color to mimic human bruising.

The researchers made an ionic organohydrogel that contained a molecule, called spiropyran, that changes color from pale yellow to bluish-purple under mechanical stress. In testing, the gel showed changes in color and electrical conductivity when stretched or compressed, and the purple color remained for 2–5 hours before fading back to yellow. Then, the team taped the I-skin to different body parts of volunteers, such as the finger, hand and knee. Bending or stretching caused a change in the electrical signal but not bruising, just like human skin. However, forceful and repeated pressing, hitting and pinching produced a color change. The I-skin, which responds like human skin in terms of electrical and optical signaling, opens up new opportunities for detecting damage in prosthetic devices and robotics, the researchers say.
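To make the dual response concrete, here is a minimal sketch of the behavior the researchers describe: any deformation shifts the ionic signal, but only a forceful impact triggers the bruise-like color change, which then fades over hours. The threshold, gain, and fade time below are illustrative assumptions, not figures from the paper.

```python
# A minimal sketch (not the authors' model) of the I-skin's dual response:
# any deformation shifts the ionic signal, but only forces above a damage
# threshold trigger the spiropyran color change, which fades back over hours.
# The threshold, gain, and fade time are illustrative assumptions.

DAMAGE_THRESHOLD_N = 20.0   # assumed force needed to trigger "bruising"
FADE_HOURS = 4.0            # chosen within the reported 2-5 hour fade window

def iskin_response(force_n, hours_since_impact):
    """Return the simulated electrical and optical state of the I-skin."""
    # Ionic conductance shifts with any mechanical strain (bend, stretch, press).
    signal_change = 0.05 * force_n  # arbitrary gain, for illustration only

    # Color changes only after a forceful impact, then relaxes back to yellow.
    bruised = force_n >= DAMAGE_THRESHOLD_N and hours_since_impact < FADE_HOURS
    color = "bluish-purple" if bruised else "pale yellow"
    return signal_change, color

print(iskin_response(5.0, 0.0))    # gentle bend: signal shifts, no "bruise"
print(iskin_response(40.0, 1.0))   # hard hit: signal shifts and skin "bruises"
print(iskin_response(40.0, 6.0))   # hours later: the color has faded back
```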

"Colorimetric Ionic Organohydrogels Mimicking Human Skin for Mechanical Stimuli Sensing and Injury Visualization" is published in ACS Applied Materials & Interfaces.


In The Minds of Dogs

By Stanley Coren, Ph.D., DSc, FRSC; Rosalind Arden, Ph.D.; Marc Bekoff, Ph.D.; Hara Estroff Marano; and John Bradshaw, Ph.D. Published September 5, 2017; last reviewed on September 13, 2017

Conversing with Canines

Think Cat in the Hat for talking to dogs.

If you want to start an argument among psychologists, behavioral biologists, and next-door-neighbor dog owners, just ask the question: Do dogs understand and use language? The argument tends to focus on whether dogs understand the words and expressions that humans use. A related concern is whether dogs use their various barks, growls, whines, and whimpers, combined with tail wags, body postures, and ear positions, to communicate with people as well as with one another.

Some scientists argue that dogs are more attuned to the emotional aspects of our word sounds than their actual meaning, and that their own signals are just visible expressions of their emotional state. Accordingly, any information such signals communicate about a dog and its intentions is just a byproduct, and those signals provide no more evidence of language ability than does our capacity to understand that other humans are happy because they're smiling or are angry because they're scowling.

With the right tools, it's possible to explore what dogs are capable of cognitively. The study of animal cognition in general, and dog cognition in particular, is now a growth industry.

In the early 1990s, it dawned on me that one of the ways to learn whether dogs actually have language was to deploy tests already developed for assessing human children—and simply modify them for use with dogs. I borrowed the MacArthur Communicative Development Inventory, which assesses language ability in very young children in terms not only of words but also gestures. When someone points a finger and we know that they are trying to communicate the location of something of interest, that's a linguistic gesture. An individual demonstrates an understanding of such an elementary message by looking or moving in that direction.

My data led to the conclusion that the average dog can learn to recognize about 165 words and gestures. "Super dogs"—those in the top 20 percent of canine intelligence—can learn 250 or more.

Other scientists soon tested my predictions. One study showed that a border collie named Rico was able to recognize more than 200 words. Perhaps the most linguistically advanced dog so far is another border collie, named Chaser. She is owned by a retired psychologist, John Pilley, and her vocabulary is around 1,000 words. What's more, Chaser understands some of the basics of grammar involved in simple sentence construction and seems to infer intention.

Evidence from testing dogs suggests that language is not an ability possessed only by humans. The knowledge that dogs have basic language skills offers further insight into the canine mind. The test scores I recorded allowed me to assign each dog a mental age representing the animal's cognitive ability. Dogs have a mental ability roughly equivalent to that of a human toddler aged 2 to 2-and-a-half. Super dogs like Chaser have minds that might be similar to that of a 3-year-old child.

Tests of canine language ability offer a new way of looking at dogs' mental skills. If a problem can't be solved by a 2- to 3-year-old child, then it is not likely that a dog can solve it either. And if a training technique won't work for a toddler, then it likely won't work for a dog. —Stanley Coren

Stanley Coren, Ph.D., is a professor emeritus of psychology at the University of British Columbia, Canada, whose research has focused on human cognition as well as dog intelligence. His latest book is Gods, Ghosts, and Black Dogs: The Fascinating Folklore and Mythology of Dogs.

Taking the IQ Test

One of the hardest tricks is coming up with a way to measure dog intelligence.

Humans have language and are mostly willing to follow the sonorous imperative, "You may now turn over your test papers." Still, it took a while to develop reliable IQ exam questions. In species less amenable to turning over—and more given to eating—said papers, the task of creating reliable test items is significantly harder.

Last year, Mark J. Adams and I published a study of 68 border collies to whom a set of six tests had been administered. We wanted to know whether dogs' cognitive abilities "hang together" the way they do in people. Four of the tests were related (from a human perspective). They comprised various barriers around which each dog had to navigate to find food. A fifth test ascertained the dogs' capacity to discriminate between quantities (to choose the bigger or smaller snack). A final test assessed their ability to understand and respond to a human gesture, specifically, a pointing arm directed at one of two inverted beakers, each covering a food reward.

We found a tendency for dogs who were better at one task to be better at others, and dogs who were faster were also more accurate. Three correlated elements—detour time, choice time, and choice accuracy—provided evidence that in dogs, as with people, cognitive abilities are associated with each other at the trait level. And as with humans, there appears to be an underlying factor exerting general influence on cognitive processes—a canine general IQ, or g-factor. The bottom line: Some dogs are smarter than others. This may sound obvious, but it has to be established empirically.
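For readers curious how a g-factor can be extracted from correlated scores, here is a minimal sketch of the standard approach. The data are fabricated purely for illustration; this is not the authors' analysis pipeline.

```python
# A minimal sketch, not the authors' analysis: estimating a canine "g" as the
# first principal component of correlated task scores. The data are fabricated
# purely to illustrate the method (rows = dogs, columns = tasks; times are
# assumed sign-flipped so that higher always means better).
import numpy as np

rng = np.random.default_rng(0)
n_dogs = 68
latent_g = rng.normal(size=n_dogs)               # hidden general ability
noise = lambda: rng.normal(scale=0.5, size=n_dogs)
scores = np.column_stack([latent_g + noise(),    # detour time (flipped)
                          latent_g + noise(),    # choice time (flipped)
                          latent_g + noise()])   # choice accuracy

# Tasks that "hang together" show positive pairwise correlations...
print(np.round(np.corrcoef(scores, rowvar=False), 2))

# ...and the first principal component of the covariance matrix captures the
# shared variance -- one conventional way to operationalize a g-factor.
eigvals, _ = np.linalg.eigh(np.cov(scores, rowvar=False))
print(f"first component explains {eigvals[-1] / eigvals.sum():.0%} of variance")
```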

How does training fit in? In our sample, all the dogs were working farm dogs, so they had received similar training exposure. But training does not make all dogs alike. As with humans, brighter dogs learn new tricks faster. Famous dogs like Betsy, who could pick up a new word after two exposures, have had countless hours of training lavished on them, but it seems likely they were all smart dogs to begin with.

As well as being smart, a highly trainable dog must be biddable. Personality and test performance are not easy to decouple with dogs because, like Bartleby the Scrivener, a dog who would prefer not to simply does not. Such recalcitrance is somewhat awkward for psychometricians working with dogs (and with nonhuman animals more generally). It would be nice to be able to cleave cleanly between intelligence and other aspects of canine behavior, such as motivation and obedience. Yet we cannot hang over their heads the threat of tanking on an SAT. We have to go with food bribes instead.

What are the properties of an ideal test item for assessing canine intelligence? All dogs should be able to do it to some extent; it must reflect mental ability, not motor skills or training; and it should have a graded outcome that is not simply pass or fail. But we should crack measuring dog intelligence because dogs are a great model for learning how cognitive abilities are associated with constellations of traits such as health, dementia risk, lifespan, and biological fitness. Among humans, for example, intelligence predicts health. Since dogs' outcomes are not subject to influence from the big hitters of epidemiology—smoking, alcohol, and drug abuse—they are terrific animals to partner with. In addition, their propensity to acquire naturally some of the same diseases that we suffer from (including dementia) makes learning about dog cognition a research priority. —Rosalind Arden

Rosalind Arden, Ph.D., is a research associate at the Centre for Philosophy of Natural and Social Science at the London School of Economics & Political Science.

Mutt Morality

Dogs know how to have fun, and encoded in their antics is a deep understanding of fair play.

We've all seen it. When dogs play, they look as if they're going crazy, frenetically wrestling, mouthing, biting, chasing, and rolling over, and doing it again and again until they can hardly stand. They use actions like those seen during fighting or mating in random and unpredictable ways. But play sequences don't reflect the more predictable patterns of behavior seen in real fighting and mating. The random nature of play is one marker that dogs are indeed playing with one another. They know it and so do we.

Despite vastly different shapes, sizes, speeds, and strengths, dogs play together with such reckless abandon—flying around, tumbling, tackling, biting, and running, often with unbelievable rapidity—that it's remarkable there's little conflict or injury. (Dog play escalates into real aggression only around 0.5 percent of the time, studies show, although people think it happens far more often, most likely because it's an attention-getter.) How does play remain playful? It's because dogs' minds are very active, and the animals process information rapidly and accurately, even on the run.

By studying dog play we learn a lot about fairness, empathy, and trust. Based on extensive research, we've discovered that dogs exhibit four basic aspects of fair play: Ask first, be honest, follow the rules, and admit when you're wrong. Dogs keep track of what is happening when they play. They can read what other dogs are doing, and they trust that others want to play rather than fight.

When we carefully study the landscape of play we learn that dogs know very well how to tell other dogs "I want to play with you." They use a number of actions: bowing, face pawing, approaching, and rapidly withdrawing, faking left and going right, mouthing, and running right at a potential playmate. Bows also can be used to tell another dog, "I'm sorry I bit you so hard, let's keep playing."

Bows—crouching on forelimbs, perhaps with barking and tail wagging—essentially are contracts to play, and they change the meaning of the actions that follow, such as biting and mounting. They also serve to reinitiate play after a pause.

Dogs and other animals know they must play fair for play to work at all. Bigger, stronger, and more dominant dogs hold back through role-reversing and self-handicapping. Role-reversing occurs when a dominant animal performs an action during play that would not normally occur during real aggression. A dominant or higher-ranking dog would not roll over on its back during fighting but will when playing.

A hot topic in ethology and animal research today is whether nonhuman animals have a theory of mind—that is, do animals know that other individuals have their own thoughts and feelings, ones that may be the same as or different from their own and that they can anticipate and account for?

For dogs to know that another dog wants to play rather than fight or mate, they need to know what the other is thinking and what its intentions are. Each needs to pay close attention to what the other dog is doing, and each uses this information to predict what the other is likely to do next. Evidence is mounting that dogs likely have a theory of mind, and confirmation is coming from research on play.

There's a good deal of mind reading going on during play, and without empathy and trust, play wouldn't happen. Most dogs are moral mutts: When fairness breaks down, so too does play. —Marc Bekoff

Marc Bekoff, Ph.D., is a professor emeritus of ecology and evolutionary biology at the University of Colorado, Boulder.

How Dog Brains Work

Dogs use the same neural pathways we do to get where they can't go.

At play as at other activities, dogs exert some degree of self-control to inhibit impulses that would take them out of the game or otherwise spoil their social relationships. In this they are much like humans; in fact, social play is a major way young children learn self-regulation. And while the canine brain is a tenth the size of ours, the effortful control of behavior is accomplished much the same way—in the same part of the brain and through a similar biological mechanism.

We know this because psychologists Gregory Berns and Peter Cook, of Emory University's Canine Cognitive Neuroscience Lab, went where no one had gone before. They painstakingly trained a number of dogs to enter an fMRI scanner of their own accord, tolerate earplugs to block out the unsettling noise, sit absolutely still when necessary, and respond to assorted commands in a fully awake state.

Their studies to understand canine brain function pinpoint the neural pathways activated in a variety of behavioral states. The goal, Berns and Cook report in a recent issue of Current Directions in Psychological Science devoted to dog cognition, is, yes, to learn about the dog brain, but it's also to gain comparative insight into human brain function.

Trained on go/no-go hand signals, dogs were scanned to see what happens in their brain when they have to suppress a predominating response to nose-poke a target in front of them. Inhibiting responses is an executive function carried out by the frontal lobes of the cortex in humans.
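For readers unfamiliar with the paradigm, here is a minimal sketch of how a go/no-go task is scored. The trials and responses are made up for illustration and are not the study's protocol.

```python
# A minimal sketch of how a go/no-go task is scored (an illustration of the
# paradigm, not the study's protocol; trials and responses are made up).
trials    = ["go", "go", "no-go", "go", "no-go", "no-go", "go", "no-go"]
responses = [True, True, True,    True, False,   False,   True, False]  # nose-poked?

hits = sum(t == "go" and r for t, r in zip(trials, responses))
false_alarms = sum(t == "no-go" and r for t, r in zip(trials, responses))
inhibition_score = 1 - false_alarms / trials.count("no-go")

print(f"hits: {hits}/{trials.count('go')}, false alarms: {false_alarms}")
print(f"inhibition score: {inhibition_score:.2f}")  # higher = better self-control
```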

The dog brain is about the size of a lemon, and the frontal lobes are very small. In humans, the frontal lobes—seat of abstract thought, planning, decision making, and more—take up the front one-third of our much larger brain. In dogs, they take up only about a tenth of the organ.

The bigger the brain of a species, the more modular it gets. Nevertheless, the researchers found, an analogous part of the brain—a small area of the frontal lobe—comes online during active inhibition. What's more, the level of brain activation correlated with the dogs' behavioral performance on the inhibition task and on other tests of self-control—including a canine version of the famed marshmallow test. The researchers were sure they were picking up a generalized behavioral trait of self-control, a facet of dog temperament.

Much as with people, there are individual differences in canine neural response, and they correlate with dog behavior and temperament. Self-control is often hard. One dog barked all the way through the task of actively inhibiting the nose-poke in the scanner—sound like anyone you know?—but still managed to restrain himself until given the release signal.

At a dizzying pace, neuroscience is providing unprecedented information about mental states. One thing studies show is that dog brains are organized similarly to ours in many ways. According to Berns, similarities in physiological processes suggest similarities in internal subjective experiences. At the very least, they imply that dog experience is richer than many people believe.

For Berns, the research also shows more. The knowledge of brain structure and cognitive function holds the key to understanding what it's like to be a dog. "Where structure-function relationships in an animal's brain are similar to those in our brains," he writes in his new book, What It's Like to Be a Dog, "it is likely that the animal is capable of having a similar subjective experience." Everyone knows what it feels like to exert self-control, he notes. "The brain data suggest that a dog's experience [is] very much the same." —Hara Estroff Marano

Leashed To the Here and Now

Do dogs know that we know that they're thinking of us?

For all the neural sophistication of dogs, science also reveals there are categorical differences in the nature of dog experience.

When we think about dogs' minds, we instinctively fall back on anthropomorphism, the idea that animals have thoughts somewhat like our own, just (in some undefined way) less so. Yet even a casual appraisal of the differences between our two species suggests that this can be no more than a crude approximation. Dogs build their picture of the world through their acute sense of smell; we humans are visual creatures first and foremost. Dogs' brains follow the standard carnivore pattern that prioritizes processing sensory information and turning it into precise and rapid action. Ours is dominated by cerebral cortices that give us unparalleled thinking abilities, including a facility for language.

Over a lifetime, we commit thousands of faces to memory; dogs must memorize the characteristic odors of hundreds of butts.

We also differ in how we process this information. Not only do our minds continually review our relationships with others, we also try to imagine how those people relate to one another. With dogs, it's more a case of "out of scent, out of mind."

The sensory and cognitive divide between dogs and their masters suggests that dogs' minute-by-minute experience of the world is significantly different from our own. Dogs seem to live almost entirely in the present, neither ruminating on the past nor planning for the future.

The evolutionary legacy of dogs makes it obvious that their social intelligence originated with their wild ancestor, the wolf. Wolves live in well-coordinated packs in which it's not only important to communicate effectively with one another but also crucial to be able to predict the intentions of other members of the pack by reading their body language. Dogs have inherited these basic building blocks, and the process of domestication has modified them to incorporate an almost uncanny ability to understand our human body language, to the point where it's easy—although probably inaccurate—to credit dogs with considerable emotional intelligence.

Our own behavior in social situations is driven by our conviction that those we interact with are capable of thinking about us, and that they know that we know that they are. Dogs' social intelligence seems to be driven by much simpler, though highly effective, processes, whereby they compare what is happening in the here-and-now with what has happened in similar situations in the past. What it does give them is an almost Zen-like detachment from the baggage of expectation and concern for the future that serves them extremely well as man's best friend. —John Bradshaw

John Bradshaw, Ph.D., is the founding director of the Anthrozoology Institute at the University of Bristol, England. His newest book, to be released this fall, is The Animals Among Us: How Pets Make Us Human.



Could Robots Create a 'Jobless Future' for Humans?

Another humanlike capability that researchers would like to build into AI is initiative. Machines excel at playing the game Go because humans directed the machines to solve it. They can’t define problems on their own, and defining problems is usually the hard part.

In a forthcoming paper for the journal "Trends in Cognitive Science," Ryota Kanai, a neuroscientist and founder of Araya, a Tokyo-based startup, discusses how to give machines intrinsic motivation. In a demonstration, he and his colleagues simulated agents driving a car in a virtual landscape that includes a hill too steep for the car to climb unless it gets a running start. If told to climb the hill, the agents figure out how to do so. Until they receive this command, the car sits idle.

Then Kanai’s team endowed these virtual agents with curiosity. They surveyed the landscape, identified the hill as a problem, and figured out how to climb it even without instruction.

“We did not give a goal to the agent,” Kanai says. “The agent just explores the environment to learn what kind of situation it is in by making predictions about the consequence of its own action.”
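Here is a minimal sketch of the general idea, under the common assumption that curiosity can be modeled as an intrinsic reward for hard-to-predict (here, rarely visited) states. It illustrates the technique; it is not Kanai's code.

```python
# A minimal sketch of curiosity as an intrinsic reward (an illustration of
# the general technique, not Kanai's code): the agent is rewarded for states
# it has rarely visited, so it explores -- including the hard-to-reach "hill"
# at the far end of the track -- without ever being given an external goal.
from collections import defaultdict

visits = defaultdict(int)   # a crude world model: how familiar is each state?
position = 0                # 1-D track of states 0..10; assume 8..10 is the "hill"

def curiosity(state):
    # Rarely visited states are poorly predicted, hence more "interesting".
    return 1.0 / (1 + visits[state])

for _ in range(1000):
    # Greedily move toward whichever neighbor promises more novelty.
    position = max([max(position - 1, 0), min(position + 1, 10)], key=curiosity)
    visits[position] += 1

# Driven only by curiosity, the agent ends up covering the entire track,
# including the hill states it was never instructed to climb.
print(dict(sorted(visits.items())))
```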

The trick is to give robots enough intrinsic motivation to make them better problem solvers, and not so much that they quit and walk out of the lab. Machines can prove as stubborn as humans. Joscha Bach, an AI researcher at Harvard, put virtual robots into a “Minecraft”-like world filled with tasty but poisonous mushrooms. He expected them to learn to avoid them. Instead, they stuffed their mouths.

“They discounted future experiences in the same way as people did, so they didn’t care,” Bach says. “These mushrooms were so nice to eat.” He had to instill an innate aversion into the bots. In a sense, they had to be taught values, not just goals.
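The mushroom problem comes down to temporal discounting. A minimal sketch, with made-up numbers, shows how a steep discount makes the immediate treat outweigh the delayed poison:

```python
# A minimal sketch of temporal discounting with made-up numbers: a steep
# discount (low gamma) makes the immediate treat outweigh the delayed poison.
def discounted_value(rewards, gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

eat  = [+1.0, -10.0]   # tasty now, poisoned one step later
skip = [0.0, 0.0]

for gamma in (0.05, 0.9):
    eats = discounted_value(eat, gamma) > discounted_value(skip, gamma)
    print(f"gamma={gamma}: value(eat)={discounted_value(eat, gamma):+.2f} "
          f"-> agent {'eats' if eats else 'abstains'}")
# The short-sighted agent (gamma=0.05) gorges; the far-sighted one abstains.
```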

Paying Attention

In addition to self-awareness and self-motivation, a key function of consciousness is the capacity to focus your attention. Selective attention has been an important area in AI research lately, not least at Google DeepMind, which developed the Go-playing computer.

“Consciousness is an attention filter,” says Stanley Franklin, a computer science professor at the University of Memphis. In a paper published last year in the journal "Biologically Inspired Cognitive Architectures," Franklin and his colleagues reviewed their progress in building an AI system called LIDA that decides what to concentrate on through a competitive process, as suggested by neuroscientist Bernard Baars in the 1980s. The processes watch for interesting stimuli — loud, bright, exotic — and then vie for dominance. The one that prevails determines where the mental spotlight falls and informs a wide range of brain function, including deliberation and movement. The cycle of perception, attention, and action repeats five to 10 times a second.
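To make the competitive process concrete, here is a minimal sketch of salience-based competition for an attentional spotlight. The stimuli, weights, and decay rule are invented for illustration; this is not the LIDA codebase.

```python
# A minimal sketch of salience-based competition for an attentional spotlight,
# in the spirit of the scheme described above (this is an illustration, not
# the LIDA codebase; stimuli, weights, and the decay rule are invented).
def salience(s):
    # Loud, bright, or exotic stimuli bid higher.
    return 2.0 * s["loud"] + 1.5 * s["bright"] + 3.0 * s["novel"]

stimuli = [
    {"name": "hum of the fridge", "loud": 0.2, "bright": 0.0, "novel": 0.0},
    {"name": "flashing light",    "loud": 0.0, "bright": 0.9, "novel": 0.4},
    {"name": "dog barking",       "loud": 0.8, "bright": 0.0, "novel": 0.7},
]

for cycle in range(3):              # the real cycle repeats 5-10 times a second
    winner = max(stimuli, key=salience)
    print(f"cycle {cycle}: spotlight on the {winner['name']}")
    winner["novel"] *= 0.3          # attended stimuli lose novelty, letting
                                    # other processes win later cycles
```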

The first version of LIDA was a job-matching server for the U.S. Navy. It read emails and focused on pertinent information while juggling each job hunter's interests, the availability of jobs, and the requirements of government bureaucracy.

Since then, Franklin’s team has used the system to model animals’ minds, especially behavioral quirks that result from focusing on one thing at a time. For example, LIDA is just as prone as humans are to a curious psychological phenomenon known as “attentional blink.” When something catches your attention, you become oblivious to anything else for about half a second. This cognitive blind spot depends on many factors and LIDA shows humanlike responses to these same factors.

Pentti Haikonen, a Finnish AI researcher, has built a robot named XCR-1 on similar principles. Whereas other researchers make modest claims — that their systems recreate some quality of consciousness — Haikonen argues that his creation is capable of genuine subjective experience and basic emotions.

The system learns to make associations much like the neurons in our brains do. If Haikonen shows the robot a green ball and speaks the word “green,” the vision and auditory modules respond and become linked. If Haikonen says “green” again, the auditory module will respond and, through the link, so will the vision module. The robot will proceed as if it heard the word and saw the color, even if it's staring into an empty void.

Conversely, if the robot sees green, the auditory module will respond, even if the word wasn’t uttered. In short, the robot develops a kind of synesthesia.

“If we see a ball, we may say so to ourselves, and at that moment our perception is rather similar to the case when we actually hear that word,” Haikonen says. “The situation in the XCR-1 is the same.”

Things get interesting when the modules clash — if, for example, the vision module sees green while the auditory module hears “blue.” If the auditory module prevails, the system as a whole turns its attention to the word it hears while ignoring the color it sees. The robot has a simple stream of consciousness consisting of the perceptions that dominate it moment by moment: “green,” “ball,” “blue,” and so on. When Haikonen wires the auditory module to a speech engine, the robot will keep a running monolog about everything it sees and feels.
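The associative mechanism Haikonen describes can be sketched in a few lines. This is a deliberately minimal illustration of Hebbian cross-modal linking, not the XCR-1's actual architecture; the learning rate and activation rules are assumptions.

```python
# A minimal sketch of the cross-modal association Haikonen describes
# (an illustration of Hebbian linking, not the XCR-1's actual architecture;
# the learning rate and activation rules are assumptions).
link = 0.0   # association strength between seeing green and hearing "green"

def train(see_green, hear_green):
    global link
    if see_green and hear_green:         # fire together, wire together
        link = min(1.0, link + 0.5)

def perceive(see_green=False, hear_green=False):
    vision = 1.0 if see_green else 0.0
    audio = 1.0 if hear_green else 0.0
    # Each module also receives the other's activity through the learned link.
    vision = max(vision, audio * link)
    audio = max(audio, vision * link)
    return {"vision": vision, "audio": audio}

print(perceive(hear_green=True))         # before training: no cross-activation
train(see_green=True, hear_green=True)   # show the green ball and say "green"
train(see_green=True, hear_green=True)
print(perceive(hear_green=True))         # now the word alone evokes the color
```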

Haikonen also gives vibration a special significance as “pain,” which preempts other sensory inputs and consumes the robot’s attention. In one demonstration, Haikonen taps the robot and it blurts, “Me hurt.”

“Some people get emotionally disturbed by this, for some reason,” Haikonen says. (He and others are unsentimental about the creations. “I’m never like, ‘Poor robot,’” Verschure says.)

A New Species

Building on these early efforts, researchers will develop more lifelike machines. We could see a continuum of conscious systems, just as there is in nature, from amoebas to dogs to chimps to humans and beyond. The gradual progress of this technology is good because it gives us time to adjust to the idea that, one day, we won't be the only advanced beings on the planet.

For a long while, our artificial companions will be vulnerable — more pet than threat. How we treat them will hinge on whether we recognize them as conscious and as capable of suffering.

“The reason that we value non-human animals, to the extent that people do, is that we see, based on our own consciousness, the light of consciousness within them as well,” says Susan Schneider, a philosopher at the University of Connecticut who studies the implications of AI. In fact, she thinks we will deliberately hold back from building conscious machines to avoid the moral dilemmas it poses.

“If you’re building conscious systems and having them work for us, that would be akin to slavery,” Schneider says. By the same token, if we don’t give advanced robots the gift of sentience, it worsens the threat they may eventually pose to humanity because they will see no particular reason to identify with us and value us.

Judging by what we’ve seen so far, conscious machines will inherit our human vulnerabilities. If robots have to anticipate what other robots do, they will treat one another as creatures with agency. Like us, they may start attributing agency to inanimate objects: stuffed animals, carved statues, the wind.

Last year, social psychologists Kurt Gray of the University of North Carolina and the late Daniel Wegner suggested in their book "The Mind Club" that this instinct was the origin of religion. "I would like to see a movie where the robots develop a religion because we have engineered them to have an intentionality prior so that they can be social," Verschure says. "But their intentionality prior runs away."

These machines will vastly exceed our problem-solving ability, but not everything is a solvable problem. The only response they could have to conscious experience is to revel in it, and with their expanded ranges of sensory perception, they will see things people wouldn’t believe.

“I don’t think a future robotic species is going to be heartless and cold, as we sometimes imagine robots to be,” Lipson says. “They’ll probably have music and poetry that we’ll never understand.”


Left or Right Tail Wags Elicit Different Emotional Responses From Dogs

Dogs can tell which way a tail wags, and respond emotionally to its direction.

Not all tail wags are created equal. Although a dog wagging its tail may look friendly to us, to other dogs, there is a wealth of information in such a seemingly simple action.

A new study finds that dogs respond to the direction of a tail wag. Canines that see tails wagging to the right are more relaxed, whereas they become more stressed when they see tails wagging to the left. The responses are a result of the differing roles played by the left and right hemispheres of a dog's brain, according to the research.

The same scientific team previously found that dogs wag their tails to the right when looking at something they want to approach, such as their owner. But they wag their tails to the left when confronted with something they want to back away from, such as another dog with an aggressive posture.

The dogs' directional tail wagging was a result of increased activation of either the left or right side of their brains, said neuroscientist Giorgio Vallortigara of the University of Trento in Italy, who led both studies.

He has characterized other right-versus-left brain differences or biases in how dogs react to sounds, scents, and emotions. "But the issue remained open whether this asymmetry conveyed any meaning to an observing dog," he said.

To find out if other dogs responded to the direction of tail wags, the researchers recruited 43 dogs of various breeds and showed them videos of another dog or a digitized silhouette of a dog with its tail wagging left or right.

The observing dogs were fitted with a vest to measure their heart rate, and their behaviors were filmed and analyzed.

When dogs looked at tails wagging to the left, their heart rate increased and they showed more signs of stress and anxiety. The dogs were more relaxed when they saw tails wagging to the right. The results were published October 31 in the journal Current Biology.
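For a sense of what such a comparison looks like, here is a minimal sketch using Welch's t-test. The heart-rate numbers are fabricated purely for illustration; they are not data from the study.

```python
# A minimal sketch of the kind of comparison reported, using Welch's t-test.
# The heart-rate numbers are fabricated for illustration; they are not data
# from the study.
from math import sqrt
from statistics import mean, stdev

left_wag  = [12.1, 9.8, 14.3, 11.0, 13.5, 10.7]  # bpm increase while watching
right_wag = [3.2, 4.1, 2.8, 5.0, 3.7, 4.4]

def welch_t(a, b):
    var_a, var_b = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / sqrt(var_a + var_b)

print(f"mean increase: left={mean(left_wag):.1f} bpm, right={mean(right_wag):.1f} bpm")
print(f"Welch's t = {welch_t(left_wag, right_wag):.1f}")  # large t -> reliable gap
```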

Vallortigara emphasized that just because dogs interpreted tail wagging as stressful or non-stressful, it did not necessarily indicate that the left or right tail wag was intended as a communication signal.

"It's possible that there's no communication going on in the intentional sense," he said. It could just be a byproduct of the activation of one side of a dog's brain over the other side.

Dogs may become more stressed out when seeing a left tail wag because "they're interpreting that the dog they're looking at might have higher arousal, or might be more likely to attack," said Lesley Rogers, an emeritus professor of neuroscience at the University of New England in Armidale, Australia, who was not involved with the study.

Rogers studied right or left biases in animal brains for more than 30 years, and was the first to show that such biases were not unique to humans.

"We know there's this fundamental pattern, that the left hemisphere is used when an animal is in a relaxed state, focused on things, and the right one [is used] when it's an emergency situation, when something novel has happened, and during an attack," she said.

A similar process is at work in human beings, in whom the right hemisphere is used to express intense emotions, Rogers added.

"Where this paper is a step forward is to show that those side biases are actually read or interpreted by another member of the species," she said. "We have very little if any other evidence of that."

The new study is "a very major contribution" to our understanding of how animals interpret such signals, said Thomas Reimchen, an evolutionary biologist at the University of Victoria in Canada, who also was not involved in the study published October 31.

"It's great work," Reimchen said. It’s one of the first studies to show that animals evaluate the left- or right-sided bias of other individuals and modify their behavior in response, he said.

In a previous study, Reimchen found that dogs in a dog park responded differently to the right and left tail wagging of a life-size robotic dog replica. Far more dogs approached the robot without stopping when the robodog's tail wagged to the left.

The fact that dogs hesitated less when approaching the left-wagging robot appears to contradict the results of Vallortigara's study, in which dogs became more anxious when they saw left-wagging tails. But the two experiments' methods were sufficiently different that it's hard to make comparisons, Reimchen said.

"The current paper did this very elegant analysis of physiological function, whereas we used this proxy of behavior, which was whether [the dogs] stopped or not," he said.

"What is clear is that there is a lot of visual information that dogs use when interacting with each other, and the tail is a very important signal," Reimchen said. The study also provides evidence that "docking," or removing portions of a dog's tail, compromises their ability to communicate, he said. (Read "How to Build a Dog" in National Geographic magazine.)

As a follow-up, Vallortigara said he would be interested in trying to look at the behaviors of freely interacting pairs of dogs.

"What I would like to do is to have two real dogs facing each other, with the possibility to measure very precisely the movement of the tail and other physiological parameters during free encounters," he said. This could help researchers capture the feedback that characterizes interactions between two real dogs.

One could imagine practical uses of the study’s findings, Vallortigara said. For example, veterinarians could approach dogs from a preferred side, and the findings could also be used for teaching and training dogs.

"It's possible that we could produce dummies tailored to [elicit] right or left [brain responses]," he said. For example, one could build a dummy dog that would wag its tail to the left or right, producing aggressive or less aggressive attitudes depending on what a trainer wants to teach a dog to do, he added.

The University of New England's Rogers would like to see this laboratory experiment repeated in a more natural environment and extended to other species. "It opens up a whole field of research," she said. Researchers could now focus on finding out how right or left biases in different species are interpreted by other members of the species, Rogers said.

Researchers could also look more carefully at side biases in other interactions in the wild, such as those between predators and prey, said the University of Victoria's Reimchen. "I'm not going to be surprised if we find all sorts of really interesting processes that nobody has ever seen before," he said.


Boston Dynamics’ robots are preparing to leave the lab — is the world ready?

Not many robotics companies can boast legions of fans online, but not many robotics companies make robots quite like Boston Dynamics.

Each time the firm shares new footage of its machines, they cause a sensation. Whether it’s a pack of robot dogs towing a truck or a human-like bot leaping nimbly up a set of boxes, Boston Dynamics’ bots are uniquely thrilling.

They’re also something of a Rorschach test for our feelings about the future, with viewers either basking in the high-tech splendor or bemoaning the coming robo-apocalypse. And when a parody video circulated last month showing a CGI “Bosstown Dynamics” robot turning on its creators, many mistook it for the real thing — a testament to how far the company has pushed what seems technologically possible.

Boston Dynamics’ Spot is leaving the laboratory

But for all its engineering prowess, Boston Dynamics now faces its biggest challenge yet: turning its stable of robots into an actual business. After decades of kicking machines in parking lots, the company is set to launch its first ever commercial bot later this year: the quadrupedal Spot. It’s a crucial test for a company that’s spent decades pursuing long-sighted R&D. And more importantly, the success — or failure — of Spot will tell us a lot about our own robot future.

Are we ready for machines to walk among us?

A Spot robot with a camera array at Amazon’s re:MARS conference. Photo by James Vincent / The Verge

Talk to anyone in the robotics industry and they’ll sum up their sector with a three-word phrase, honed by years of trial and error: robots are hard.

The sector is notoriously unforgiving, with startups and established companies often collapsing with little warning. Just last year, three robotics companies folded in the space of a few months. In response, industry veteran James Kuffner of the Toyota Research Institute summarized the challenges of building robots on Facebook. “It requires significant funding, committed leadership, highly skilled staff, resources, and infrastructure, and an excellent product and market strategy,” wrote Kuffner. “Not to mention flawless execution.”

Boston Dynamics has developed robots for the military like BigDog and LS3 (above), but they were rejected for being too loud. Credit: DVIDS

Boston Dynamics' robots seem flawless, but that's partly because they've never had to operate in the hurly-burly of commercial environments. Since its founding in 1992, the company has relied on deep-pocketed patrons like the Department of Defense and Alphabet. Its earlier life was shaped by government contracts, and in 2013 it was bought by Google's parent company, as part of an abortive attempt by the search giant to enter the robotics industry.

Boston Dynamics CEO Marc Raibert tells The Verge that those years of contracting and research were necessary to bring the company to its current stage of development.

“We’ve been an R&D company for a long time, working on pushing the envelope [and] making robots that try to live up to people’s idea of what a robot should be,” says Raibert. “And it’s natural … that as we do that R&D it makes robots more and more useful, and it makes it obvious to us that, ‘Oh, this thing could be used and commercialized.’”

Pentagon contracts gave Boston Dynamics the time and space needed to develop cutting-edge legged robots like the pack-mule BigDog (eventually rejected by the military for being too noisy), but they’ve not yet led to a salable robot. Instead, the company has consistently impressed backers by giving machines a trait that’s eluded them for decades: mobility.

True mobility is something beyond the ken of most machines, explains Hod Lipson, a professor of engineering at Columbia University. “We think that playing chess is a big deal but just walking around, coordinating hundreds of muscles is an incredible accomplishment,” Lipson tells The Verge. “Robots, in the most part, are still very clumsy. The smallest physical obstacle bewilders them.”

A typical industrial robot in use today: static and unintelligent. Photo by Julian Stratenschulte via Getty Images

Consequently, most machines used in factories and warehouses today are huge, static, and unintelligent things: designed to stay in one place and perform repetitive tasks. The robots of tomorrow, by comparison, will be agile and dynamic, capable of working alongside humans and reacting to changing environments and behavior. Incidentally, that's why Boston Dynamics is so fond of pushing and shoving its robots in videos. There's nothing like a swift kick to the ribs to prove that a robot can cope with physical uncertainty.

Boston Dynamics has been “trying to break this invisible boundary” of mobility for decades, says Lipson. But as a result, salable applications “almost seem like an afterthought.”

It’s an assessment with which many in the industry agree. “They were on the government dole then on the Google dole,” Erik Nieves, founder of automation company Plus One Robotics, tells The Verge. “They had no real mission: just be awesome! But they’re already awesome.”

The trigger for commercialization seems to have been the company's acquisition in 2017 by Japanese tech giant SoftBank. Raibert says commercialization was always the end goal, but that access to SoftBank's significant resources has allowed the company to kick its production of robots into a higher gear. "That's one of the ingredients," he says.

SoftBank is an "unabashedly commercial company" and will want to "get a return on their investment," Nieves says. Notably, the Japanese firm's other bets in robotics — which include Aldebaran, makers of the Pepper robot, and Fetch Robotics, which does warehouse automation — have been selling robots in commercial settings for years.

As well as making Spot into a salable robot, the company has also bought logistics startup Kinema Systems to pave the way into warehouse automation. Boston Dynamics is already selling robots it acquired with this purchase, giving it a foothold in the new industry.

As part of this new focus, Raibert has become a familiar figure on the tech conference scene. Appearing onstage in his trademark Hawaiian shirts, he regularly wows audiences with tech demos, directing Spot to jump, trot, and dance, like a robot ringmaster.

Raibert's big promise is that Spot will become the "Android of robotics" — a customizable platform that other companies can build on to meet specific needs. "We specifically designed it as a platform so it can be customized for lots of different users," says Raibert. "In very short order, we'll have a number of different attachments that can be used to customize the robots." (Though the actual launch date and price have yet to be confirmed.)

So far, these payloads include robotic arms capable of grabbing and manipulating objects; sensor arrays, including thermal and 360-degree cameras; and radio units, so Spot can become a mobile relay for communications. This flexibility means Boston Dynamics plans to sell and lease Spot for a wide array of tasks: everything from surveying construction sites and industrial buildings to package delivery and security applications.
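One way to picture the "platform" idea in software terms is a plugin interface: the base robot handles locomotion, and payloads register mission-specific behaviors behind a common contract. This is a purely hypothetical sketch; it is not Boston Dynamics' API, and every name here is invented.

```python
# A purely hypothetical sketch of the "platform" idea in software terms: the
# base robot handles locomotion, and payloads plug in behind a common
# interface. This is not Boston Dynamics' API; every name here is invented.
from abc import ABC, abstractmethod

class Payload(ABC):
    @abstractmethod
    def run(self) -> str: ...

class ThermalCamera(Payload):
    def run(self) -> str:
        return "captured thermal frame"

class RadioRelay(Payload):
    def run(self) -> str:
        return "relayed comms packet"

class QuadrupedBase:
    def __init__(self):
        self.payloads = []

    def mount(self, payload: Payload):
        self.payloads.append(payload)    # customization happens here

    def patrol_step(self):
        # The base supplies mobility; payloads supply the mission behavior.
        return [p.run() for p in self.payloads]

robot = QuadrupedBase()
robot.mount(ThermalCamera())
robot.mount(RadioRelay())
print(robot.patrol_step())
```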

Is there anyone Boston Dynamics wouldn’t sell Spot to? To law enforcement or the military, for example? Raibert doesn’t rule it out. “We’re enthusiastic about responsible use of the robot,” he says. “I think you’re asking a tough question because there’s so many edges on it.”

There’s certainly a market for these applications. Companies like Knightscope already offer robot security guards that patrol spaces like parking lots and malls. And while these bots are cheaper than humans (Knightscope’s cost roughly $7 an hour), they’re limited by their wheeled designs. Curbs are a problem and stairs an impossibility. A Knightscope robot demonstrated these flaws memorably in 2017 when it nose-dived into a fountain. It’s the sort of image that looks perfect in a pitch deck, with Spot on the next slide, trotting happily over obstacles.

Spot’s appeal lies in its modularity. Customers can add on different modifications, like the grabbing arm above. Image: Boston Dynamics

At this point, Boston Dynamics’ robots look less like a single product and more like a vision of the future. Legged robots have been difficult to build for decades, but advances in a range of connected fields — including sensors, motors, control software, and machine vision — are making them viable for the first time.

Raibert says the difference between Spot and earlier legged robots is night and day. “It’s a tribute to what we’ve learned over the years in sensing the terrain, in balancing the robot, in controlling it,” he says. To reach this level the company has leaned on what Raibert calls “low-level AI.” That means artificial intelligence control systems that are responsible for keeping the machines upright and balanced in all situations. Actually telling the robot where to go and what to do is left to humans, who control Spot using a modified gaming tablet.
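A minimal sketch can illustrate what "low-level AI" balance control means in practice: a fast feedback loop that computes a corrective torque from the measured lean angle, while the human operator supplies the high-level goal. This is a generic proportional-derivative controller on a linearized inverted pendulum with illustrative gains; it is not Boston Dynamics' controller.

```python
# A minimal sketch of what "low-level AI" balance control amounts to: a fast
# feedback loop computing corrective torque from the measured lean, while the
# human operator supplies the high-level goal. This is a generic PD controller
# on a linearized inverted pendulum with illustrative gains -- not Boston
# Dynamics' controller.
KP, KD = 40.0, 8.0              # proportional and derivative gains (assumed)
angle, angular_vel = 0.3, 0.0   # leaning 0.3 rad after a "kick"
dt = 0.01                       # 100 Hz control loop

for _ in range(300):            # three simulated seconds
    torque = -KP * angle - KD * angular_vel      # corrective response
    angular_vel += (torque + 9.8 * angle) * dt   # gravity destabilizes...
    angle += angular_vel * dt                    # ...the controller restores

print(f"lean after recovery: {angle:.4f} rad")   # ~0: upright again
```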

Legged designs have many natural advantages, says Lipson. They’re easy to balance and hard to knock over. They work in a wide variety of environments and are supremely adaptable. “This is why nature has so many legged machines,” says Lipson. “It’s a very versatile platform … I believe this will be the primary platform for future robotics.”

Boston Dynamics isn't the only firm that sees this potential. In the years the company has spent honing its designs, numerous competitors have sprung up with similar products. These include Laikago, a quadrupedal robot designed by Chinese firm Unitree Robotics; the Vision and Wraith series, made by the Philadelphia-based Ghost Robotics; and ANYmal, created by ANYbotics, a company spun out of ETH Zurich University in Switzerland.

ANYmal is particularly similar to Spot, able to work indoors and outdoors with a range of add-ons. Co-founder Péter Fankhauser tells The Verge that the company has already started selling its robots (though he won’t say for how much), and recently demoed one of its machines surveying an offshore energy platform in the North Sea.

ANYmal has been used to survey an offshore energy platform. Credit: ANYbotics

It’s the perfect showcase for technology, says Fankhauser. Not only can a legged machine navigate the tight corridors and stairs of this sort of industrial environment, but sending a robot to remote locations means one less human stuck in the middle of nowhere. “These are dangerous jobs, remote jobs, and hence expensive jobs,” he says. “The business case is very, very clear.”

In Fankhauser’s view, it’s good news that more companies are launching legged robots. It creates a more varied market, he says, and gives customers greater confidence. “Companies would be afraid to buy these things if there was only one supplier.”

Jeff Burnstein, president of the Association for Advancing Automation, agrees that it’s a “good sign” that more companies than just Boston Dynamics are involved in this sector. But he tells The Verge that it’s hard to predict whether the bots will really take off given that the hardware is so new. “We just haven’t had many of these products on the market before.”

While the case for Spot seems largely optimistic, it might prove trickier for Boston Dynamics to get its robots running in a more banal, but potentially more lucrative, setting: the warehouse.

After the company bought Kinema Systems, it started selling the startup's Pick robot, a static industrial arm equipped with pneumatic suckers that uses deep learning to see the world around it. (The arm itself is made by Japanese robotics giant Yaskawa.) This year, it showed how its wheeled robot Handle, which uses a counterweight to balance a single picking arm, might be incorporated into the same setting, depalletizing goods and stacking boxes.

Selling Pick seems straightforward: Kinema Systems proved the tech’s utility before the company was acquired. But incorporating Handle into this same environment is much more ambitious — more so than Spot. Boston Dynamics is presenting the robot as something closer to a direct stand-in for humans: a machine that’s able to navigate a warehouse as easily as a person. That means it could potentially be slotted right into a company’s workflow, rather than asking a firm to reorganize their factories or warehouses.

Handle the robot has pneumatic suckers for picking up boxes, but it would struggle with other materials. Photo: Boston Dynamics

For Nieves, whose company also automates logistics and warehouses, it’s a tough job. He says Handle’s design is pure Boston Dynamics: graceful and agile, “a beautiful piece of engineering.” But it will be more expensive than other options, both human and mechanical, he warns, and it’s hamstrung by its design in a number of ways.

For a start, it’s untethered, meaning it would rely on battery power. That would necessitate companies buying multiple units in each location, so some could charge while others worked. Its pneumatic gripper is also only able to grab certain goods, making it tricky to use for a lot of common warehouse tasks, like loading or unloading a truck.

“There is no way robots can do that today,” says Nieves. “You are playing Tetris in 3D in real time and half of the stuff you’re having to load isn’t even rigid.” He says: “I’m convinced that Handle as you see it today will not see the inside of a true warehouse … They’ve got work to do yet in how they bring this to market.”

Raibert says the technology is still being developed, and that the company has no timeline for when Handle might go on sale. But he says the opportunity for warehouse automation is “massive.”

“You go look at logistic activities around the world and it’s essentially unautomated. People are distracted by thinking that Amazon has their warehouse entirely automated, but they just have one or two tasks automated,” he says. Having truly mobile, dynamic robots like Handle would change that. “There’s a lot of low-hanging fruit there,” says Raibert.

Boston Dynamics’ bipedal Atlas robot — designed for R&D only.

Like the commenters underneath Boston Dynamics' videos, it's hard not to see the company as a litmus test for what the future of automation will look like. And that's not at all unfair — the future of automation really is uncertain.

Robots are becoming more common in everyday settings, but experts worry they're not doing a very good job. In a paper this year, economists Daron Acemoglu and Pascual Restrepo — two of the most respected researchers in the field — warned of a phenomenon they dubbed "so-so" automation, when the technology used to replace people doesn't offer any actual benefit to the economy. Automated factories don't get faster or more productive; they just swap human labor for its machine equivalent, with the benefits of the exchange accruing to those in charge.

We're already seeing this dynamic play out in parts of the workforce. At Walmart, for example, where robots have taken on mundane tasks like scanning shelves, employees say the machines don't make their jobs easier. In fact, they make them harder, because of the extra work needed to manage the bots. In Amazon warehouses, robots are taking on more jobs, but, as Raibert notes, the automation is still only partial, and in the interim humans are treated more like machines.

Acemoglu and Restrepo warn that if this trend continues and new job opportunities are not created, life will only get harder for working people, something we're already seeing in the rise of precarious jobs and wage stagnation.

It's this depressing, deflating context that makes Boston Dynamics' robots so exciting. Rather than making the same old trundling bots that fall over and wait for a human to pick them up, the company seems to be leapfrogging "so-so" automation into something more technologically advanced. At least, that's what the company's videos show. Now it's time for Boston Dynamics to prove that its robots are ready to leave the lab and head out into the world.

"I think robots are going to affect people's lives in a good way. I think it's going to increase productivity, I think it's going to release people from dull, dirty, and dangerous [jobs]," says Raibert. "I would hate to see the great opportunities in a technology like this missed because of fear of what the downsides may be."


Industrial/Consumer Robotics Applications

The use of AI-enabled robotics is burgeoning in the industrial and consumer sectors, especially the former, where it's used to do everything from quickly shipping packages to exploring oceans for untapped oil deposits. Below are four publicly traded companies that develop robots or implement them in some aspect of their business practices.

iRobot


Location: Bedford, Massachusetts

Stock Symbol: IRBT

How it's using robots: iRobot has developed a series of AI-enabled robot vacuums, mops and pool cleaners. The iRobot Roomba vacuums create a map of the house they are cleaning and track cleaning patterns to find the most efficient routes and the spots that need the most attention.

Industry Impact: The 25-year-old robotics company developed a pool cleaner dubbed "Mirra," which is programmed to clean the surface of the water as well as a pool's walls and floor. iRobot's cleaner uses AI to track its routes and identify areas where bacteria buildup can occur.

Boston Dynamics


Location: Waltham, Massachusetts

Stock Symbol: SFTBY (SoftBank, parent company)

How it's using robots: Boston Dynamics creates human- and animal-like robots that do everything from carrying heavy loads in factories to performing reconnaissance for the U.S. military. Originally spun out of MIT and later acquired by SoftBank, the company has a stable of nine robots that all perform different tasks.

Industry Impact: Boston Dynamics boasts an impressive lineup of robots. The "WildCat," an animal-like robot, can run at 32 kilometers per hour. The company's "LS3" is a load-carrying robot designed to follow U.S. Marines while hauling up to 181 kilograms. "Atlas" is a humanoid robot that can run, jump and carry.

Oceaneering International


Location: Houston, Texas

Stock Symbol: OII

How it's using robots: Oceaneering International's fleet of remotely operated vehicles (ROVs) assists oil and gas companies with underwater operations. The company's eight different robots do everything from heavy lifting to underwater rig inspections to repairing problems with underwater pipelines.

Industry Impact: The National Oceanic and Atmospheric Administration (NOAA) has tasked Oceaneering International with updating navigational charts and researching marine habitats and fisheries off the coast of Florida. Oceaneering will use its fleet of aquatic robots to survey about 650 nautical miles by December 2018.

Amazon Robotics


Location: North Reading, Massachusetts

Stock Symbol: AMZN

How it's using robots: Amazon Robotics creates and implements autonomous robots, control software, and robotic language- and vision-sensing systems for Amazon's fulfillment-center operations. Originally called Kiva Systems, the company uses its robots as automated storage and retrieval mechanisms throughout Amazon's vast warehouses.

Industry Impact: The implementation of robotics has reportedly cut Amazon's costs by up to 20 percent. The floor-roaming robots can locate and carry entire shelves or individual products to conveyor belts, saving time and resources. Before the acquisition, the company (then Kiva Systems) also reportedly built retrieval systems for Gap, Walgreens and Staples.


Why give AI rights in the first place?

We already attribute moral accountability to robots and project awareness onto them when they look super-realistic. The more intelligent and life-like our machines appear to be, the more we want to believe they're just like us—even if they're not. Not yet.

But once our machines acquire a base set of human-like capacities, it will be incumbent upon us to look upon them as social equals, and not just pieces of property. The challenge will be in deciding which cognitive thresholds, or traits, qualify an entity for moral consideration, and by consequence, social rights. Philosophers and ethicists have had literally thousands of years to ponder this very question.

"The three most important thresholds in ethics are the capacity to experience pain, self-awareness, and the capacity to be a responsible moral actor," sociologist and futurist James Hughes, the Executive Director of the Institute for Ethics and Emerging Technologies, told Gizmodo.

“In humans, if we are lucky, these traits develop sequentially. But in machine intelligence it may be possible to have a good citizen that is not self-aware or a self-aware robot that doesn’t experience pleasure and pain,” Hughes said. “We’ll need to find out if that is so.”

It’s important to point out that intelligence is not the same as sentience (the ability to perceive or feel things), consciousness (awareness of one’s body and environment), and self-awareness (recognition of that consciousness). A machine or algorithm could be as smart—if not smarter—than humans, but still lack these important capacities. Calculators, Siri, and stock trading algorithms are intelligent, but they aren’t aware of themselves, they’re incapable of feeling emotions, and they can’t experience sensations of any kind, such as the color red or the taste of popcorn.

Hughes believes that self-awareness comes with some minimal citizenship rights, such as the right not to be owned and to have its interests in life, liberty, and growth respected. With both self-awareness and moral capacity (i.e., knowing right from wrong, at least according to the moral standards of the day) should come full adult human citizenship rights, argues Hughes, such as the rights to make contracts, own property, vote, and so on.

"Our Enlightenment values oblige us to look to these truly important rights-bearing characteristics, regardless of species, and set aside pre-Enlightenment restrictions on rights-bearing to only humans or Europeans or men," he said. Obviously, our civilization hasn't yet attained these lofty pro-social goals, and the expansion of rights continues to be a work in progress.


In The Minds of Dogs

By Stanley Coren, Ph.D., DSc, FRSC; Rosalind Arden, Ph.D.; Marc Bekoff, Ph.D.; Hara Estroff Marano; and John Bradshaw, Ph.D. Published September 5, 2017; last reviewed September 13, 2017.

Conversing with Canines

Think Cat in the Hat for talking to dogs.

If you want to start an argument among psychologists, behavioral biologists, and next-door-neighbor dog owners, just ask the question: Do dogs understand and use language? The argument tends to focus on whether dogs understand the words and expressions that humans use. A related concern is whether dogs use their various barks, growls, whines, and whimpers, combined with tail wags, body postures, and ear positions, to communicate with people as well as with one another.

Some scientists argue that dogs are more attuned to the emotional aspects of our word sounds than their actual meaning, and that their own signals are just visible expressions of their emotional state. Accordingly, any information such signals communicate about a dog and its intentions is just a byproduct, and those signals provide no more evidence of language ability than does our capacity to understand that other humans are happy because they're smiling or are angry because they're scowling.

With the right tools, it's possible to explore what dogs are capable of cognitively. The study of animal cognition in general, and dog cognition in particular, is now a growth industry.

In the early 1990s, it dawned on me that one of the ways to learn whether dogs actually have language was to deploy tests already developed for assessing human children—and simply modify them for use with dogs. I borrowed the MacArthur Communicative Development Inventory, which assesses language ability in very young children in terms not only of words but also gestures. When someone points a finger and we know that they are trying to communicate the location of something of interest, that's a linguistic gesture. An individual demonstrates an understanding of such an elementary message by looking or moving in that direction.

My data led to the conclusion that the average dog can learn to recognize about 165 words and gestures. "Super dogs"—those in the top 20 percent of canine intelligence—can learn 250 or more.

Other scientists soon tested my predictions. One study showed that a border collie named Rico was able to recognize more than 200 words. Perhaps the most linguistically advanced dog so far is another border collie, named Chaser. Owned by a retired psychologist, John Pilley, Chaser has a vocabulary of around 1,000 words. What's more, she understands some of the basics of grammar involved in simple sentence construction and seems to infer intention.

Evidence from testing dogs suggests that language is not an ability possessed only by humans. The knowledge that dogs have basic language skills offers further insight into the canine mind. The test scores I recorded allowed me to assign each dog a mental age representing the animal's cognitive ability. Dogs have a mental ability roughly equivalent to a human toddler age 2 to 2-and-a-half. Super dogs like Chaser have minds that might be similar to that of a 3-year-old child.

Tests of canine language ability offer a new way of looking at dogs' mental skills. If a problem can't be solved by a 2- to 3-year-old child, then it is not likely that a dog can solve it either. And if a training technique won't work for a toddler, then it likely won't work for a dog. —Stanley Coren

Stanley Coren, PH.D., is a professor emeritus of psychology at the University of British Columbia, Canada, whose research has focused on human cognition as well as dog intelligence. His latest book is Gods, Ghosts, and Black Dogs: The Fascinating Folklore and Mythology of Dogs.

Taking the IQ Test

One of the hardest tricks is coming up with a way to measure dog intelligence.

Humans have language and are mostly willing to follow the sonorous imperative, "You may now turn over your test papers." Still, it took a while to develop reliable IQ exam questions. In species less amenable to turning over—and more given to eating—said papers, the task of creating reliable test items is significantly harder.

Last year, Mark J. Adams and I published a study of 68 border collies to whom a set of six tests had been administered. We wanted to know whether dogs' cognitive abilities "hang together" the way they do in people. Four of the tests were related (from a human perspective). They comprised various barriers around which each dog had to navigate to find food. A fifth test ascertained the dogs' capacity to discriminate between quantities (to choose the bigger or smaller snack). A final test assessed their ability to understand and respond to a human gesture, specifically, a pointing arm directed at one of two inverted beakers, each covering a food reward.

We found a tendency for dogs who were better at one task to be better at others, and dogs who were faster were also more accurate. Three correlated elements—detour time, choice time, and choice accuracy—provided evidence that in dogs, as with people, cognitive abilities are associated with each other at the trait level. And as with humans, there appears to be an underlying factor exerting general influence on cognitive processes—a canine general IQ, or g-factor. The bottom line: Some dogs are smarter than others. This may sound obvious, but it has to be established empirically.
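
For readers curious what extracting a g-factor looks like in practice, a common approach is to take the first principal component of the standardized, correlated test scores. Below is a minimal sketch on simulated data; it is not the study's data or its exact analysis, which was more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 68 dogs: a single latent ability "g" drives three correlated
# measures (detour time, choice time, choice accuracy), each with noise.
# Timing measures are sign-flipped here so higher always means better.
n_dogs = 68
g = rng.normal(size=n_dogs)
scores = np.column_stack([
    g + 0.6 * rng.normal(size=n_dogs),  # detour time (flipped)
    g + 0.7 * rng.normal(size=n_dogs),  # choice time (flipped)
    g + 0.8 * rng.normal(size=n_dogs),  # choice accuracy
])

# Standardize, then take the first principal component of the
# correlation matrix: that component plays the role of the g-factor.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
explained = eigvals[-1] / eigvals.sum()   # variance explained by PC1
g_hat = z @ eigvecs[:, -1]                # each dog's estimated g score

# abs() because an eigenvector's sign is arbitrary.
print(f"PC1 explains {explained:.0%} of the variance")
print(f"|correlation| with true g: {abs(np.corrcoef(g_hat, g)[0, 1]):.2f}")
```

When the tasks really do share an underlying factor, the first component soaks up a large share of the variance and tracks the latent ability closely; if the tasks were unrelated, no single component would dominate.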

How does training fit in? In our sample, all the dogs were working farm dogs, so they had received similar training exposure. But training does not make all dogs alike. As with humans, brighter dogs learn new tricks faster. Famous dogs like Betsy, who could pick up a new word after two exposures, have had countless hours of training lavished on them, but it seems likely they were all smart dogs to begin with.

As well as being smart, a highly trainable dog must be biddable. Personality and test performance are not easy to decouple with dogs because, like Bartleby the Scrivener, a dog who would prefer not to simply does not. Such recalcitrance is somewhat awkward for psychometricians working with dogs (and with nonhuman animals more generally). It would be nice to be able to cleave cleanly between intelligence and other aspects of canine behavior, such as motivation and obedience. Yet we cannot hang over their heads the threat of tanking on an SAT. We have to go with food bribes instead.

What are the properties of an ideal test item for assessing canine intelligence? All dogs should be able to do it to some extent; it must reflect mental ability, not motor skills or training; and it should have a graded outcome that is not simply pass or fail. But we should crack measuring dog intelligence, because dogs are a great model for learning how cognitive abilities are associated with constellations of traits such as health, dementia risk, lifespan, and biological fitness. Among humans, for example, intelligence predicts health. Since dogs' outcomes are not subject to influence from the big hitters of epidemiology—smoking, alcohol, and drug abuse—they are terrific animals to partner with. In addition, their propensity to acquire naturally some of the same diseases that we suffer from (including dementia) makes learning about dog cognition a research priority. —Rosalind Arden

Rosalind Arden, Ph.D., is a research associate at the Centre for Philosophy of Natural and Social Science at the London School of Economics & Political Science.

Mutt Morality

Dogs know how to have fun, and encoded in their antics is a deep understanding of fair play.

We've all seen it. When dogs play, they look as if they're going crazy, frenetically wrestling, mouthing, biting, chasing, and rolling over, and doing it again and again until they can hardly stand. They use actions like those seen during fighting or mating in random and unpredictable ways. But play sequences don't reflect the more predictable patterns of behavior seen in real fighting and mating. The random nature of play is one marker that dogs are indeed playing with one another. They know it and so do we.

Despite vastly different shapes, sizes, speeds, and strengths, dogs play together with such reckless abandon—flying around, tumbling, tackling, biting, and running, often with unbelievable rapidity—that it's remarkable there's little conflict or injury. (Dog play escalates into real aggression only around 0.5 percent of the time, studies show, although people think it happens far more often, most likely because it's an attention-getter.) How does play remain playful? It's because dogs' minds are very active, and the animals process information rapidly and accurately, even on the run.

By studying dog play we learn a lot about fairness, empathy, and trust. Based on extensive research, we've discovered that dogs exhibit four basic aspects of fair play: Ask first, be honest, follow the rules, and admit when you're wrong. Dogs keep track of what is happening when they play. They can read what other dogs are doing, and they trust that others want to play rather than fight.

When we carefully study the landscape of play we learn that dogs know very well how to tell other dogs "I want to play with you." They use a number of actions: bowing, face pawing, approaching, and rapidly withdrawing, faking left and going right, mouthing, and running right at a potential playmate. Bows also can be used to tell another dog, "I'm sorry I bit you so hard, let's keep playing."

Bows—crouching on forelimbs, perhaps with barking and tail wagging—essentially are contracts to play, and they change the meaning of the actions that follow, such as biting and mounting. They also serve to reinitiate play after a pause.

Dogs and other animals know they must play fair for play to work at all. Bigger, stronger, and more dominant dogs hold back through role-reversing and self-handicapping. Role-reversing occurs when a dominant animal performs an action during play that would not normally occur during real aggression. A dominant or higher-ranking dog would not roll over on its back during fighting but will when playing.

A hot topic in ethology and animal research today is whether nonhuman animals have a theory of mind—that is, do animals know that other individuals have their own thoughts and feelings, ones that may be the same as or different from their own and that they can anticipate and account for?

For dogs to know that another dog wants to play rather than fight or mate, they need to know what the other is thinking and what its intentions are. Each needs to pay close attention to what the other dog is doing, and each uses this information to predict what the other is likely to do next. Evidence is mounting that dogs likely have a theory of mind, and confirmation is coming from research on play.

There's a good deal of mind reading going on during play, and without empathy and trust, play wouldn't happen. Most dogs are moral mutts: When fairness breaks down, so too does play. —Marc Bekoff

Marc Bekoff, Ph.D., is a professor emeritus of ecology and evolutionary biology at the University of Colorado, Boulder.

How Dog Brains Work

Dogs use the same neural pathways we do to get where they can't go.

At play as at other activities, dogs exert some degree of self-control to inhibit impulses that would take them out of the game or otherwise spoil their social relationships. In this they are much like humans; in fact, social play is a major way young children learn self-regulation. And while the canine brain is a tenth the size of ours, the effortful control of behavior is accomplished much the same way—in the same part of the brain and through a similar biological mechanism.

We know this because psychologists Gregory Berns and Peter Cook, of Emory University's Canine Cognitive Neuroscience Lab, went where no one had gone before. They painstakingly trained a number of dogs to enter an fMRI scanner of their own accord, tolerate earplugs to block out the unsettling noise, sit absolutely still when necessary, and respond to assorted commands in a fully awake state.

Their studies to understand canine brain function pinpoint the neural pathways activated in a variety of behavioral states. The goal, Berns and Cook report in a recent issue of Current Directions in Psychological Science devoted to dog cognition, is, yes, to learn about the dog brain, but it's also to gain comparative insight into human brain function.

Trained on go/no-go hand signals, dogs were scanned to see what happens in their brain when they have to suppress a predominating response to nose-poke a target in front of them. Inhibiting responses is an executive function carried out by the frontal lobes of the cortex in humans.

The dog brain is about the size of a lemon, and the frontal lobes are very small. In humans, the frontal lobes—seat of abstract thought, planning, decision making, and more—take up the front one-third of our much larger brain. In dogs, they take up only about a tenth of the organ.

The bigger the brain of a species, the more modular it gets. Nevertheless, the researchers found, an analogous part of the brain—a small area of the frontal lobe—comes online during active inhibition. What's more, the level of brain activation correlated with the dogs' behavioral performance on the inhibition task and on other tests of self-control—including a canine version of the famed marshmallow test. The researchers were sure they were picking up a generalized behavioral trait of self-control, a facet of dog temperament.
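
As a rough illustration of how go/no-go performance is commonly scored, here is a small sketch using the signal-detection measure d'. This is a standard metric for such tasks, not necessarily the exact one Berns and Cook used, and the trial counts below are invented:

```python
from statistics import NormalDist

def d_prime(hits, go_trials, false_alarms, nogo_trials):
    """Signal-detection sensitivity for a go/no-go task: the z-scored
    hit rate minus the z-scored false-alarm rate. Rates are clipped
    away from 0 and 1 so the inverse CDF stays finite."""
    clip = lambda p: min(max(p, 0.01), 0.99)
    z = NormalDist().inv_cdf
    return z(clip(hits / go_trials)) - z(clip(false_alarms / nogo_trials))

# Hypothetical dog: nose-pokes on 18 of 20 go trials, and fails to
# inhibit the poke on 4 of 20 no-go trials.
print(round(d_prime(18, 20, 4, 20), 2))  # ~2.12; higher = better control
```

A score like this gives each dog a single graded number for inhibitory control, which is the kind of behavioral measure that can then be correlated with frontal-lobe activation in the scanner.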

Much as with people, there are individual differences in canine neural response, and they correlate with dog behavior and temperament. Self-control is often hard. One dog barked all the way through the task of actively inhibiting the nose-poke in the scanner—sound like anyone you know?—but still managed to restrain himself until given the release signal.

At a dizzying pace, neuroscience is providing unprecedented information about mental states. One thing studies show is that dog brains are organized similarly to ours in many ways. According to Berns, similarities in physiological processes suggest similarities in internal subjective experiences. At the very least, they imply that dog experience is richer than many people believe.

For Berns, the research also shows more. The knowledge of brain structure and cognitive function holds the key to understanding what it's like to be a dog. "Where structure-function relationships in an animal's brain are similar to those in our brains," he writes in his new book, What It's Like to Be a Dog, "it is likely that the animal is capable of having a similar subjective experience." Everyone knows what it feels like to exert self-control, he notes. "The brain data suggest that a dog's experience [is] very much the same." —Hara Estroff Marano

Leashed To the Here and Now

Do dogs know that we know that they're thinking of us?

For all the neural sophistication of dogs, science also reveals there are categorical differences in the nature of dog experience.

When we think about dogs' minds, we instinctively fall back on anthropomorphism, the idea that animals have thoughts somewhat like our own, just (in some undefined way) less so. Yet even a casual appraisal of the differences between our two species suggests that this can be no more than a crude approximation. Dogs build their picture of the world through their acute sense of smell; we humans are visual creatures first and foremost. Dogs' brains follow the standard carnivore pattern that prioritizes processing sensory information and turning it into precise and rapid action. Ours is dominated by cerebral cortices that give us unparalleled thinking abilities, including a facility for language.

Over a lifetime, we commit thousands of faces to memory; dogs must memorize the characteristic odors of hundreds of butts.

We also differ in how we process this information. Not only do our minds continually review our relationships with others, we also try to imagine how those people relate to one another. With dogs, it's more a case of "out of scent, out of mind."

The sensory and cognitive divide between dogs and their masters suggests that dogs' minute-by-minute experience of the world is significantly different from our own. Dogs seem to live almost entirely in the present, neither ruminating on the past nor planning for the future.

The evolutionary legacy of dogs makes it obvious that their social intelligence originated with their wild ancestor, the wolf. Wolves live in well-coordinated packs in which it's not only important to communicate effectively with one another but also crucial to be able to predict the intentions of other members of the pack by reading their body language. Dogs have inherited these basic building blocks, and the process of domestication has modified them to incorporate an almost uncanny ability to understand our human body language, to the point where it's easy—although probably inaccurate—to credit dogs with considerable emotional intelligence.

Our own behavior in social situations is driven by our conviction that those we interact with are capable of thinking about us, and that they know that we know that they are. Dogs' social intelligence seems to be driven by much simpler, though highly effective, processes, whereby they compare what is happening in the here-and-now with what has happened in similar situations in the past. What this simpler machinery does give them is an almost Zen-like detachment from the baggage of expectation and concern for the future, and that detachment serves them extremely well as man's best friend. —John Bradshaw

John Bradshaw, Ph.D., is the founding director of the Anthrozoology Institute at the University of Bristol, England. His newest book, to be released this fall, is The Animals Among Us: How Pets Make Us Human.


