Dog brain scans suggest dogs see actions more than things

Humans enjoy providing voiceovers for the behavior of cats, dogs, lizards and other pets; social media proves it. In our house, every pet has a specific voice, from the posh British accent of Reggie the ball python (also called royal python) to the sarcastic streetwise commentary of Tigra the cat, who started life under our shed. Funny and fitting as a python’s British accent may be, Reggie doesn’t paraphrase the dead parrot sketch while eating a rat that is not just resting. And Tigra doesn’t really want a cheeseburger. We may be projecting our own perceptions onto our pets’ minds, but really we are just anthropomorphizing.


In the pet food industry, though, the true mental states of dogs and cats matter. For example, developing a new product depends on deciphering the animals’ real motivations. During feeding trials, what is going on in a dog’s head can only be inferred by observing behavior or analyzing body fluids and feces. Although similar techniques are used in human and pet taste tests, a dog cannot fill out a questionnaire the way its primate counterparts can. Much of what causes a dog or cat to choose one food over another can only be inferred by researchers.

While Dr. Dolittle’s dream remains elusive, advances in brain scanning and analysis have opened a window into how canine brains reconstruct what they see. Researchers at Emory University have found evidence suggesting that, when we dub over our dogs’ behavior, we should probably use more verbs. For pet food professionals, getting into a dog’s head may provide insight into how vision and other perceptions influence a dog’s food preferences.

Brain scans reveal how dogs see the world

Quoted from a press release:

Dogs may be more attuned to actions in their environment than to who or what is performing the action.

The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine learning algorithm to analyze the patterns in the neural data.

“We’ve shown that we can monitor activity in a dog’s brain while they are watching a video and, to at least a limited degree, reconstruct what they are looking at,” said Gregory Berns, professor of psychology at Emory.

The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, providing new insights into the nature of cognition. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

“While our work is based on only two dogs, it provides proof of concept that these methods work on canines,” said study first author Erin Phillips. Phillips conducted the research while working in Berns’ Canine Cognitive Neuroscience Lab at Emory. “I hope this paper will help pave the way for other researchers to apply these methods to dogs, as well as to other species, so that we can gain more data and greater insights into how the brains of different animals work.”

The Journal of Visualized Experiments published the results of the research.

Berns and his colleagues pioneered training techniques for getting dogs to walk into an fMRI scanner and remain completely still and unrestrained while their neural activity is measured. A decade ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That opened the door to what Berns calls the “Dog Project” – a series of experiments exploring the mind of the oldest domesticated species.

Over the years, his lab has published research on how a dog’s brain processes vision, words, smells, and rewards such as receiving praise or food.

Meanwhile, the technology behind machine learning algorithms kept improving, allowing scientists to decode some patterns of human brain activity. The technology “reads minds” by detecting patterns in the brain data that identify the different objects or actions an individual is seeing while watching a video.

“I started wondering, ‘Can we apply similar techniques to dogs?’” Berns recalls.

The first challenge was to create video content that a dog might find interesting enough to watch for an extended period. The Emory research team mounted a video recorder on a gimbal and selfie stick, which allowed them to shoot steady footage from a dog’s perspective, at about waist height to a human or slightly below.

They used the device to create a half-hour video of scenes related to the lives of most dogs. Activities included dogs being petted by people and receiving treats from people. Dogs were also shown sniffing, playing, eating or walking on a leash. Other scenes showed cars, bikes or a motorcycle passing by on a road; a cat walking in a house; a deer crossing the road; people sitting; people hugging or kissing; people offering a rubber bone or a ball to the camera; and people eating.

The video data was segmented by timestamps into different classifiers, including object-based classifiers (e.g., dog, car, human and cat) and action-based classifiers (e.g., sniffing, playing or eating).
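
As a rough illustration of that labeling step, the sketch below turns hypothetical timestamped scene annotations into object and action labels for any moment in the video. The file format, column names and label sets here are assumptions made for the example, not the study’s actual annotation scheme.

```python
# Minimal sketch: turn hypothetical timestamped scene annotations into
# object/action labels. The CSV columns and label sets are illustrative only.
import csv

OBJECT_LABELS = {"dog", "car", "human", "cat"}        # object-based classifiers
ACTION_LABELS = {"sniffing", "playing", "eating"}     # action-based classifiers

def load_annotations(path):
    """Read (start_s, end_s, objects, actions) rows into labeled intervals."""
    intervals = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            intervals.append({
                "start": float(row["start_s"]),
                "end": float(row["end_s"]),
                "objects": set(row["objects"].split("|")) & OBJECT_LABELS,
                "actions": set(row["actions"].split("|")) & ACTION_LABELS,
            })
    return intervals

def labels_at(intervals, t):
    """Return the object and action labels active at video time t (seconds)."""
    objects, actions = set(), set()
    for iv in intervals:
        if iv["start"] <= t < iv["end"]:
            objects |= iv["objects"]
            actions |= iv["actions"]
    return objects, actions
```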

Only two of the dogs trained for the fMRI experiments had the focus and temperament to lie perfectly still and watch a 30-minute video without a break, across three sessions totaling 90 minutes. These two “superstar” canines were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

“They didn’t even need treats,” says Phillips, who monitored the animals during the fMRI sessions and watched their eyes tracking the video. “It was fun because it’s serious science, and a lot of time and effort went into it, but it came down to these dogs watching videos of other dogs and humans acting kind of silly.”

Two humans also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in an fMRI scanner.

Brain data can be mapped to video classifiers using timestamps.
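
A sketch of what that timestamp alignment might look like follows, reusing the interval format from the earlier sketch. The repetition time and hemodynamic lag values are illustrative assumptions, not parameters reported by the study.

```python
# Sketch of aligning fMRI volumes with video labels via timestamps.
# TR and the hemodynamic lag below are assumed values for illustration.
import numpy as np

TR_SECONDS = 1.5        # assumed time between successive fMRI volumes
BOLD_LAG_SECONDS = 4.0  # assumed delay between stimulus and BOLD response

def label_volumes(n_volumes, intervals, label_names):
    """Return a binary matrix: one row per fMRI volume, one column per label.

    `intervals` uses the same {"start", "end", "objects", "actions"} records
    as the annotation sketch above.
    """
    y = np.zeros((n_volumes, len(label_names)), dtype=int)
    for v in range(n_volumes):
        # Video time that this volume reflects, shifted back by the BOLD lag.
        t = v * TR_SECONDS - BOLD_LAG_SECONDS
        active = set()
        for iv in intervals:
            if iv["start"] <= t < iv["end"]:
                active |= iv["objects"] | iv["actions"]
        y[v, :] = [int(name in active) for name in label_names]
    return y
```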

A machine learning algorithm, a neural network known as Ivis, was applied to the data. A neural network is a way to do machine learning by having a computer analyze training examples. In this case, the neural network was trained to classify the content of brain data.
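
The sketch below shows one way such a decoding step could be wired up. It uses the `ivis` Python package only for its documented embedding call and a plain scikit-learn classifier for the final decision, with illustrative hyperparameters, so it is a stand-in for the general idea rather than the authors’ actual Ivis pipeline.

```python
# Stand-in sketch: embed voxel data with Ivis, then classify each volume.
# X is a (volumes x voxels) matrix and y a class label per volume; both are
# assumed to have been extracted already. Hyperparameters are illustrative.
from ivis import Ivis                                  # pip install ivis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def decode(X, y):
    """Return held-out accuracy for mapping brain data to video classifiers."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    # Reduce high-dimensional voxel data to a low-dimensional embedding.
    embedder = Ivis(embedding_dims=8, k=15)
    Z_train = embedder.fit_transform(X_train)
    Z_test = embedder.transform(X_test)

    # Classify each volume's embedding as one of the video classifiers.
    clf = LogisticRegression(max_iter=1000).fit(Z_train, y_train)
    return accuracy_score(y_test, clf.predict(Z_test))
```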

For the two human subjects, the model developed using the neural network showed 99% accuracy in mapping the brain data onto both the object- and action-based classifiers.

In the case of decoding video content from the dogs, the model did not work for the object classifiers. It was 75% to 88% accurate, however, at decoding the action classifications from the dogs’ brain data.

The results point to significant differences in how the brains of humans and dogs work.

“We humans are very object oriented,” Berns says. “There are 10 times as many nouns as verbs in the English language because we have a particular obsession with naming things. Dogs seem to be less interested in who or what they are seeing and more interested in the action itself.”

Berns notes that dogs and humans also have significant differences in their visual systems. Dogs see only in shades of blue and yellow, but have a slightly higher density of vision receptors designed to detect motion.

“It makes sense that dogs’ brains are highly attuned to actions first and foremost,” he says. “Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount.”

For Phillips, understanding how different animals perceive the world is important to her current field research on how predator reintroduction in Mozambique affects ecosystems. “Historically, there hasn’t been much overlap between computer science and ecology,” she says. “But machine learning is a growing field that is beginning to find broader applications, including in ecology.”

Additional authors of the paper include Daniel Dilks, Emory associate professor of psychology, and Kirsten Gillette, who worked on the project as an undergraduate in neuroscience and behavioral biology at Emory. Gillette has since graduated and is now in a post-baccalaureate program at the University of North Carolina.

Daisy is owned by Rebecca Beasley and Bhubo is owned by Ashwin Sakhardande. The human trials in the study were supported by a grant from the National Eye Institute.

