Recent advances in deep learning and artificial intelligence enable a new approach to understanding whale song. Supervised deep learning models could help identify humpback whale sounds, while unsupervised learning models could classify the song “units” in new ways. This project aims to investigate humpback songs from different years and regions: to detect humpback whale sounds in recordings, to classify these calls or units, and eventually to determine whether individually distinct calls or units can be found (as we do in Manatee Chat). This information will be used to train deep learning models to identify humpback whale sounds in recordings, classify calls, and search for underlying structure, ideally leading to a better understanding of the song’s function. If individually distinct calls can be identified, they could be used to track migrating whales acoustically, yielding valuable information about humpback whale populations and their health.
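
The unsupervised classification step could be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it assumes each detected song unit has already been reduced to a fixed-length feature vector (for example, averaged spectrogram bands), uses synthetic data in place of real recordings, and picks k-means with a guessed number of clusters purely for demonstration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-in for detected song units: each row is one unit's
# feature vector (e.g. averaged spectrogram bands). Real features would
# be extracted from spectrograms of the recordings.
low_units = rng.normal(loc=0.0, scale=0.3, size=(50, 16))   # e.g. low-frequency moans
high_units = rng.normal(loc=3.0, scale=0.3, size=(50, 16))  # e.g. high-frequency cries
features = np.vstack([low_units, high_units])

# Standardize the features, then cluster into a guessed number of unit types.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each detected unit now carries a cluster label: a first, unsupervised
# pass at a catalogue of song "unit" types.
print(np.bincount(labels))
```

In practice the number and nature of the clusters would themselves be open questions, which is exactly why unsupervised methods are attractive here: they may group units differently from a human-defined catalogue.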