By: Sophia Fiat
From phones that can identify cancer to self-driving cars and autonomous grocery stores, today’s technology and deep learning applications are pretty badass – and they are becoming deep-rooted in every part of our lives, for better or for worse.
The applications of deep learning technology are just starting to emerge, but clearly there are already some incredible uses for it. Deep learning sounds more intimidating than it actually is. Basically, it’s the ability to teach computers to learn complicated concepts by associating them with simpler ones. We now have fast enough computers and large enough labeled data sets that we can use algorithms to do this. Lost yet? In my three-part blog, I provide a few more practical examples to help you catch on.
Part 1: A Cell Phone that Can Diagnose Cancer?
I’m pretty sure I am not the only one to self-diagnose my illness with the help of the Internet. Last year, I (luckily) correctly determined that I needed to up my magnesium and calcium intake thanks to a few Google searches on my symptoms. But what happens when we have bigger health concerns, or worse, an undiagnosed disease? Well, cue the cell phone app that’s changing the game – forever.
According to a January 2017 paper published in the journal Nature, Stanford researchers recently trained a computer to identify images of skin cancer moles and lesions as accurately as 21 board-certified dermatologists. Using a convolutional neural net (a type of neural network that excels at recognizing visual patterns), researchers taught the software to identify patterns by telling it whether each uploaded image showed skin cancer or not. The computer was presented with 129,450 images representing more than 2,000 skin diseases, and it was able to classify skin lesions based on this data. According to CNN, “this new research suggests, a simple cell phone app may help patients diagnose skin cancer – the most common of all cancers in the United States – for themselves.”
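For the curious, here is a toy sketch of the core operation inside a convolutional neural net – sliding a small filter across an image to detect a pattern. This is purely illustrative (the image, filter values, and function here are made up for the example, and a real model like Stanford’s learns thousands of filters from labeled data rather than using a hand-written one):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small filter over an image, producing a 'feature map'.
    A convolutional neural net repeats this operation with many
    learned filters to build up pattern recognition."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the filter against one patch and sum the result
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 6x6 "image": dark on the left half, bright on the right half.
image = np.array([[0, 0, 0, 1, 1, 1]] * 6, dtype=float)

# A hand-made vertical-edge filter: it responds strongly wherever
# brightness jumps from left to right.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

feature_map = convolve2d(image, kernel)
print(feature_map[0])  # strongest response right at the edge column
```

Running this prints a row that is zero everywhere except at the boundary between dark and bright pixels, which is the filter “lighting up” on the edge it was designed to find. A trained network discovers filters like this on its own, layer by layer, from simple edges up to complex shapes like moles and lesions.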
Will this type of medical diagnostic artificial intelligence impact doctors and healthcare costs? Absolutely! The CNN report goes on to state, “with 6.3 billion smartphone subscriptions estimated to be in use by 2021, the researchers noted, their new system in the form of an app, could provide low-cost universal access to diagnostic care.”
It’s important to remember that large sets of labeled data are necessary to teach a computer to identify such diseases, thus making it a lengthy process. As stated by Carl Vondrick, a Ph.D. candidate at MIT, “though humans can learn to recognize patterns from very little data, machines require thousands or even billions of examples.” However, researchers point out that this technology may be able to detect details in digital photos that have been overlooked by the human eye.
While this technology seems promising and is on track to help billions of people, I don’t think it’s anywhere close to completely replacing a board-certified doctor. And not to throw shade, but my generation is already consumed by technology – especially our precious smartphones. I still find value and comfort in human interactions. If I were worried about cancer, I would seek a human doctor because I would want emotional support during a nerve-wracking time. (BTW…we’re seriously going to need to specify human doctors versus app doctors pretty soon!) On the other hand, I can appreciate the benefit of this technology, as it can make cancer screenings more available to people who may not otherwise have access to proper medical care. The key to this technology’s success will be identifying where humans and artificial intelligence can work together for the common good.