Can we speak with animals using artificial intelligence?

Animal vocalizations have long piqued people’s curiosity and been the subject of research. Many primates produce distinct alarm calls depending on the predator, some songbirds can rearrange the elements of their songs to convey different meanings, and dolphins communicate using diverse whistles. Yet because no animal communication system meets all the criteria that define a language, most experts prefer not to call it one.

Until recently, decoding these signals relied largely on careful observation. According to The Guardian, researchers have increasingly turned to machine learning to manage the enormous volumes of data that modern animal-borne sensors can gather.

Aza Raskin, for his part, decided to use machine learning to decode non-human communication, with the aim of promoting the conservation of other species and fostering stronger human relationships with them. He is the co-founder and president of the Earth Species Project (ESP). In one demonstration Raskin describes, a dolphin handler gestures ‘together’ and ‘create’ with her hands.

The two trained dolphins exchanged sounds before surfacing, turning around, and lifting their tails to perform a new trick of their own invention. Raskin insists that this does not prove the existence of a language, but rather hints at how much easier such exchanges would be if the animals had access to a communication tool.

ESP asserts that its approach is distinct from others because it focuses on deciphering the communication of all species rather than just one. ‘We’re species agnostic,’ Raskin says. According to The Guardian, ‘the technologies we develop… can work across all of nature, from worms to whales.’

Machine learning is already being used to understand animal communication, according to associate professor Elodie Briefer, whose research focuses on vocal communication in animals. In collaboration with others, Briefer created an algorithm that can tell whether a pig is happy or sad based on its grunts.
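Briefer’s actual pipeline is not described in the article. As a minimal sketch of the general idea, a classifier can be trained on simple acoustic features of each call and then asked to label new calls; everything below (the feature choices, the numbers, and the nearest-centroid method) is an invented illustration, not the published method.

```python
import math
import random

# Toy nearest-centroid classifier: label a pig call "positive" or
# "negative" from two illustrative acoustic features
# (call duration in seconds, mean pitch in Hz).

def centroid(rows):
    """Element-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(labeled):
    """labeled: list of (features, label) pairs -> {label: centroid}."""
    by_label = {}
    for feats, label in labeled:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, feats):
    """Return the label whose centroid is closest to feats."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], feats))

# Synthetic training data: short, low-pitched grunts labelled
# "positive"; long, high-pitched squeals labelled "negative".
# The numbers are invented for this sketch.
random.seed(0)
data = [([random.gauss(0.2, 0.05), random.gauss(120, 15)], "positive")
        for _ in range(50)]
data += [([random.gauss(0.9, 0.2), random.gauss(2500, 300)], "negative")
         for _ in range(50)]

model = train(data)
print(predict(model, [0.25, 130]))   # short low grunt  -> "positive"
print(predict(model, [1.0, 2400]))   # long high squeal -> "negative"
```

Real systems typically work on spectrograms with far richer models, but the train/predict shape is the same.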

Another tool, DeepSqueak, analyzes rats’ ultrasonic vocalizations to determine, for example, whether the animals are stressed.

Project CETI (the Cetacean Translation Initiative) is another effort to translate sperm whale communication using machine learning.

Another research project aims to automatically determine the functional meaning of vocalizations. It is underway in the ocean science-focused lab of Professor Ari Friedlaender at the University of California, Santa Cruz.

One of the program’s main objectives is to analyze how wild marine mammals interact underwater, since they are difficult to observe directly. Small electronic biologging devices attached to the animals record their position and movements and capture what they see through a camera.
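Tags like these produce time series that must be segmented into behaviors before they can be interpreted. As a hedged sketch of that step, the toy function below splits a depth record into dives; the field layout, sample values, and 2-metre threshold are all invented for illustration and do not reflect any particular tag format.

```python
# Illustrative only: segment a biologging tag's depth trace (one
# reading per second, in metres) into dives, i.e. maximal runs of
# samples deeper than a threshold.

def find_dives(depths, threshold=2.0):
    """Return (start_index, end_index) pairs where depth > threshold."""
    dives, start = [], None
    for i, d in enumerate(depths):
        if d > threshold and start is None:
            start = i                      # dive begins
        elif d <= threshold and start is not None:
            dives.append((start, i - 1))   # dive ends
            start = None
    if start is not None:                  # trace ends mid-dive
        dives.append((start, len(depths) - 1))
    return dives

# Invented sample trace: two dives separated by time at the surface.
depth_trace = [0.5, 1.0, 3.0, 12.0, 25.0, 18.0, 4.0, 1.5, 0.8, 2.5, 6.0, 1.0]
print(find_dives(depth_trace))  # -> [(2, 6), (9, 10)]
```

Real analyses combine depth with accelerometer and video channels, but segmenting the raw streams into events is a common first step.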
