Classification tasks are among the most widely known machine learning applications. Computers can classify images, sounds, and patterns with high accuracy, given enough training data, time, and computational resources. After training, computers can perform identification tasks faster than humans and often with less error. In contrast, humans are well equipped for anomaly identification and can adapt to new information quickly and effectively. Thus, we introduce DialectDecoder, a tool that uses human intelligence and machine learning to classify different dialects of White-crowned Sparrow songs, relying on both the human and the computer for different parts of the classification process. DialectDecoder is an example of humans and computers working together to detect and classify anomalies. After the data are preprocessed and the classifiers are trained with the built-in tools, each new song is fed to the network, where the computer either classifies it as a specific dialect or assigns it conflicting labels; songs with conflicting labels are sent, along with those labels, to a human expert for classification. The human expert can label all such songs and append them to the training set, giving the computer updated data on which to retrain. We performed three tests that illustrate the applicability of human-machine teaming and demonstrate the ability of DialectDecoder, paired with an expert human, to classify different dialects of White-crowned Sparrow songs. We then discuss extensions of DialectDecoder and its applicability to other human-machine teaming tasks that leverage the distinct strengths contributed by each half of the partnership to improve overall performance.
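To make the human-in-the-loop workflow described above concrete, here is a minimal sketch of the triage loop: the computer labels songs it is confident about, routes songs with conflicting labels to the human expert, and folds the expert's labels back into the training set for retraining. The abstract does not specify how conflicting labels arise, so this sketch assumes, purely for illustration, two independent classifiers whose disagreement triggers human review. All names (`ToyClassifier`, `ask_expert`, `triage`, the dialect labels) are hypothetical placeholders, not DialectDecoder's actual implementation.

```python
# Minimal human-in-the-loop triage sketch. Feature extraction, the real
# models, and the expert-labeling GUI are all placeholders (assumptions).
from dataclasses import dataclass, field
import random

DIALECTS = ["clear", "buzzy", "trilled"]  # hypothetical dialect names


@dataclass
class ToyClassifier:
    """Stand-in for a trained song classifier."""
    rng: random.Random
    training_set: list = field(default_factory=list)

    def predict(self, song):
        # Placeholder: a real model would map spectrogram features to a dialect.
        return self.rng.choice(DIALECTS)

    def retrain(self, labeled_songs):
        # Placeholder: refit the model on the expanded training set.
        self.training_set.extend(labeled_songs)


def ask_expert(song):
    """Stand-in for the expert-labeling interface; here we just pick a label."""
    return random.choice(DIALECTS)


def triage(songs, clf_a, clf_b):
    """Split new songs into automatically labeled ones and conflicts for review."""
    auto_labeled, needs_review = [], []
    for song in songs:
        label_a, label_b = clf_a.predict(song), clf_b.predict(song)
        if label_a == label_b:
            auto_labeled.append((song, label_a))             # computer handles it
        else:
            needs_review.append((song, (label_a, label_b)))  # conflicting labels
    return auto_labeled, needs_review


if __name__ == "__main__":
    clf_a = ToyClassifier(random.Random(0))
    clf_b = ToyClassifier(random.Random(1))
    new_songs = [f"song_{i}.wav" for i in range(10)]

    auto_labeled, needs_review = triage(new_songs, clf_a, clf_b)

    # The human expert resolves every conflict; the results grow the training set.
    expert_labeled = [(song, ask_expert(song)) for song, _ in needs_review]
    for clf in (clf_a, clf_b):
        clf.retrain(expert_labeled)

    print(f"{len(auto_labeled)} songs labeled automatically, "
          f"{len(expert_labeled)} resolved by the expert and queued for retraining")
```

The key design point this sketch illustrates is the division of labor: routine classification stays with the machine, while only the ambiguous or anomalous cases reach the human, whose corrections continually improve the classifiers.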