
AI Can Detect Coronavirus by Listening to Coughs

As coronavirus case numbers rise every day, it is getting harder to cope with the wave of infections. With asymptomatic cases being a major concern with Covid-19, there may be a new way to tell whether someone has coronavirus or not.

Researchers have developed an AI (artificial intelligence) tool that can detect indications in a person's cough that may point to a coronavirus infection, before that person seeks medical help and additional testing.

According to MIT researchers, asymptomatic people may differ from healthy people in the way they cough. While these differences are not audible to the human ear, artificial intelligence can detect them. The researchers are working to make the tool available to the public as an app, which would still need approval from the Food and Drug Administration.

According to an article in the IEEE Journal of Engineering in Medicine and Biology, the researchers state that forced-cough recordings captured on a laptop, smartphone, or PC give the artificial intelligence enough signal to recognize telltale changes. They report that the AI tool correctly identified 98.5% of coughs from people with a confirmed coronavirus infection, including 100% of coughs from asymptomatic people who tested positive for the disease. The model was trained on what the researchers say are tens of thousands of cough recordings.
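To make the recording step more concrete, here is a minimal sketch of how a forced-cough recording captured on a phone or laptop could be turned into features that an audio classifier can score. The use of librosa and log-mel spectrograms here is an assumption for illustration, not the researchers' published pipeline.

```python
# Sketch: turning a forced-cough recording into model-ready features.
# Assumes librosa is installed; mel spectrograms are a common choice for
# audio classifiers, not necessarily the exact features used by the MIT team.
import numpy as np
import librosa

def cough_to_features(wav_path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Load a cough recording and return a log-mel spectrogram (n_mels x frames)."""
    audio, _ = librosa.load(wav_path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Example: features = cough_to_features("cough_sample.wav")
# A classifier would then score these features as "likely Covid" or "likely not".
```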

According to Brian Subirana, a research scientist in MIT's Auto-ID Laboratory, the effective implementation of this group diagnostic tool could reduce the spread of the pandemic, especially if everyone used it before going to a restaurant, a classroom, or a factory.

Users could use the app every day

Such an app would let a user log in daily and record the sound of their cough on their phone. They would then learn whether the cough suggests an infection, which would signal the need for additional testing.

The model is built on ResNet50, a general-purpose machine-learning architecture. The first network was trained to distinguish sounds associated with different degrees of vocal cord strength, working through up to 10,000 hours of speech to pick out particular words. A second neural network was then trained to classify emotional states that often show up in speech.

The same network also recognizes the sentiments expressed by patients who have Alzheimer's. A third neural network was trained on a database of cough recordings to capture differences in lung and respiratory performance. The final step is an algorithm that distinguishes between weak and robust coughing, with weak coughing being a possible sign of muscular degradation.
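For illustration, a multi-branch model along the lines described above might be sketched as follows in PyTorch. The class name CoughBiomarkerNet, the pooling head, and the use of torchvision's off-the-shelf resnet50 are assumptions made for this example, not the researchers' actual implementation.

```python
# Sketch of the described architecture: several ResNet50-style branches, each
# tuned for one biomarker (vocal cord strength, sentiment, lung/respiratory
# performance, muscular degradation), pooled into one Covid/not-Covid score.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CoughBiomarkerNet(nn.Module):
    def __init__(self, num_branches: int = 4):
        super().__init__()
        # One ResNet50 feature extractor per biomarker branch.
        self.branches = nn.ModuleList()
        for _ in range(num_branches):
            backbone = resnet50(weights=None)  # in practice: pretrained weights
            backbone.fc = nn.Identity()        # keep the 2048-d feature vector
            self.branches.append(backbone)
        # Combine the concatenated biomarker features into one binary score.
        self.head = nn.Sequential(
            nn.Linear(2048 * num_branches, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, 3, H, W) -- a spectrogram image replicated to 3 channels.
        features = [branch(spectrogram) for branch in self.branches]
        return torch.sigmoid(self.head(torch.cat(features, dim=1)))

# Example: scores = CoughBiomarkerNet()(torch.randn(2, 3, 224, 224))
```

Each branch stands in for one of the biomarkers above; consistent with the transfer-learning approach the article describes, each branch would start from weights trained on its own task (word detection, sentiment, respiratory sounds) before the combined head is fitted to cough data.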
