Artificial Intelligence in Skin Cancer Recognition: Is It Possible?

Artificial Intelligence has been the talk of the information technology industry. It is one of the ultimate goals of true automation: a computer that learns to mimic the independent thinking of a human being. Researchers have high hopes for the technology and the countless applications it could serve once it proves its effectiveness, from law enforcement and financial services to customer service and healthcare. In this article, we look into the possibilities of using artificial intelligence against melanoma, a life-threatening skin cancer. Since doctors recommend detecting this cancer early, we will examine how practical artificial intelligence is for that task.

Artificial Intelligence 101

In simple terms, Artificial Intelligence (AI) aims to imitate the cognitive capacity of the human mind. AI systems are designed to perform problem-solving and decision-making tasks on input data and produce a discernible output.

Under the AI umbrella sit the more sophisticated Machine Learning (ML) and Deep Learning (DL). The key difference is that DL usually requires less input from the programmer because it learns the relevant features of the data on its own. A deep network processes data forward, from input to output, and then propagates the resulting error backward through the network to adjust its parameters until the output is precise.
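
To make that forward-and-backward flow concrete, here is a minimal sketch in PyTorch (our own illustration with made-up data, not code from any study discussed in this article) that pushes data forward through a tiny network and then propagates the error backward to adjust its weights:

```python
import torch
import torch.nn as nn

# A tiny fully connected network: 4 input features -> 1 output score.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()                  # loss for a yes/no decision
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)                            # 16 made-up samples
y = torch.randint(0, 2, (16, 1)).float()          # made-up 0/1 labels

prediction = model(x)                             # forward pass: input -> output
loss = loss_fn(prediction, y)                     # how wrong the output is
loss.backward()                                   # backward pass: error -> gradients
optimizer.step()                                  # adjust weights to reduce the error
```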

Our discussion will mainly focus on a DL architecture called the Convolutional Neural Network (CNN), which is built specifically for image processing, classification, and pattern detection.
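
To show what "convolutional" looks like in practice, here is a deliberately small, hedged sketch in PyTorch (our own example, not a model from any of the studies mentioned below) of a CNN that takes an RGB image and outputs two class scores:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A deliberately small CNN for 64x64 RGB images with two output classes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # detect local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # two classes, e.g. benign/malignant

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

scores = TinyCNN()(torch.randn(1, 3, 64, 64))  # one fake image in -> two class scores out
print(scores.shape)                            # torch.Size([1, 2])
```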

Why AI on Skin Cancer?

As previously mentioned, the CNN's specialization in image recognition suits a disease that manifests itself on the skin: a photograph of the lesion can be captured and examined through the lens of an AI.

Specifically, we will discuss the type of skin cancer called melanoma. Skin cancer is the most common cancer in the United States, and melanoma is its deadliest form, causing the majority of skin cancer-related deaths. Yet melanoma's five-year survival rate rises from 22.5% at Stage IV to 98.4% at Stage II and below if it is diagnosed and treated early.

Doctors can diagnose melanoma through naked-eye examination. Dermoscopy can improve on this, but a meta-analysis of studies published between 1987 and 2008 observed that the sensitivity of dermoscopy rarely reached above 80%. The next best option is a skin biopsy, which examines the patient's skin tissue through a time-consuming and labor-intensive histopathological analysis.
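
For readers unfamiliar with the metric, sensitivity is simply the fraction of true disease cases that a test actually flags. Here is a short illustrative calculation with made-up numbers (not data from the meta-analysis):

```python
# Hypothetical screening results: 100 patients who truly have melanoma
# and 200 who do not (illustrative numbers only).
true_positives = 80    # melanoma cases the test correctly flagged
false_negatives = 20   # melanoma cases the test missed
true_negatives = 170   # healthy patients correctly cleared
false_positives = 30   # healthy patients incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 80% -- the level dermoscopy rarely exceeded
print(f"Specificity: {specificity:.0%}")  # 85%
```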

Given these diagnostic hurdles and the lethality of late-stage melanoma, there is a clear need for an alternative early-screening method backed by intensive research. The development of CNNs to assist in recognizing melanoma, or skin cancer in general, is a welcome technology.

The Melanoma Recognition Revolution Through AI

The difficulty in detecting malignant lesions such as melanoma is their strong resemblance to benign counterparts. For example, a nevus, or mole, is a benign tumor that appears as a pigmented spot on the body and can look very similar to melanoma. The two are especially hard to tell apart with the naked eye, and when a missed early diagnosis can cost lives, doctors can afford no room for confusion.

In a systematic review of AI systems for melanoma classification, researchers concluded that all 19 CNN-based models demonstrated diagnostic performance better than, or at least equivalent to, that of clinicians. Eleven of those studies used only dermoscopic images, six used clinical images, and only two relied on histopathological images.

Reading these results, one might hastily conclude that we should already be using these AIs in the hospital setting. It is not that simple. The reviewers raise the risk of publication bias: studies tend to be published only when they report positive results, so we have little grasp of the negative or inconclusive findings that never see print.

Another caveat the authors of the systematic review raise concerns methodology. The reviewed studies were conducted in artificial settings: the images fed to the AI did not reflect the full patient population, so many melanoma subtypes were never encountered. A personalized health care plan for a specific patient still beats the generalized approach an AI can provide.

Despite these shortcomings, more and more researchers believe AI belongs in the future of healthcare, and they continue to work on alternatives for people who cannot afford to visit a dermatologist regularly.

Melanoma AI Recognition In Our Pockets

In a study published on 19 April 2022, researchers tested 11 DL models on the HAM10000 dataset of dermoscopic images. Of the 11, they adopted a CNN-based model, DenseNet169, which achieved 92.25% accuracy and 93.59% sensitivity, for the development of a mobile application. The model classifies a skin lesion as either benign or malignant.
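
The study's training code is not reproduced here, but a hedged sketch of what such a setup could look like, using torchvision's pretrained DenseNet169 repurposed for a two-class benign/malignant output, is shown below. The folder path, hyperparameters, and preprocessing are placeholders of ours, not the authors' exact pipeline, and the weights argument assumes a recent torchvision release:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing; the study's exact pipeline may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: ham10000_binary/{benign,malignant}/*.jpg
train_set = datasets.ImageFolder("ham10000_binary", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained DenseNet169 with its classifier swapped for a 2-class head.
model = models.densenet169(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # a single pass over the data, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```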

Using the mobile application is as simple as taking a photo with the device's camera, cropping it, and waiting for the result. The app can also provide personalized skin information based on the user's skin complexion and environment, including the allowed sun exposure time, the current ultraviolet radiation level, the user's phototype, and the sunscreen strength to use.
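
Under the hood, the "take a photo, crop, wait" flow boils down to a preprocessing-plus-inference step along the lines of the sketch below. This is our own approximation, not the app's actual code; it assumes a trained two-class model such as the fine-tuned DenseNet169 sketched earlier:

```python
import torch
from PIL import Image
from torchvision import transforms

LABELS = ["benign", "malignant"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),            # the user's cropped photo is resized
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify(photo_path: str, model: torch.nn.Module) -> str:
    """Return 'benign' or 'malignant' for a cropped lesion photo."""
    image = Image.open(photo_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)    # add a batch dimension
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    return LABELS[int(probs.argmax())]

# Hypothetical usage: classify("lesion_crop.jpg", trained_densenet)
```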

Although it is a promising technology, the mobile application still faces a few problems:

  1. The AI behind the mobile application demands so much computational power that only high-end phones can run it, defeating the purpose of accessibility. The researchers suggest offloading the computation to the cloud; a rough sketch of what that could look like appears after this list.
  2. The assessment depends heavily on the quality of the photo, which further restricts accessibility. The researchers suggest macro lenses and phone stabilizers, and they also provide some AI-based image enhancement for this purpose.
  3. Users need to take the photo properly for an accurate assessment. The developers can address this with a short tutorial before use.
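
On the first point, offloading to the cloud usually means the phone simply uploads the cropped photo and receives a label back. Here is a minimal, purely illustrative client sketch; the endpoint URL and the response schema are hypothetical, not part of the published study:

```python
import requests

API_URL = "https://example.com/api/classify-lesion"  # hypothetical endpoint

def classify_in_cloud(photo_path: str) -> str:
    """Upload a cropped lesion photo and return the server's label."""
    with open(photo_path, "rb") as f:
        response = requests.post(API_URL, files={"image": f}, timeout=30)
    response.raise_for_status()
    return response.json()["label"]   # e.g. "benign" or "malignant" (hypothetical schema)
```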

These are standard growing pains for an advanced technology, but the sensitivity to image quality and to how the photo is taken means the app can produce incorrect results. That could confuse users, or it could alert them early to a developing disease. Either way, it remains debatable whether these technologies are ripe for public use.
