Detection and Classification of Medicinal Plants for Various Life-Threatening Diseases Using a Recurrent Neural Network
Abstract
Many plant species across the world contain bioactive chemicals that can be used to treat life-threatening diseases. Scientists routinely study and experiment with such species in pharmaceutical laboratories; in particular, many species used against life-threatening diseases such as cancer are found in dense forests and oceans. On the basis of those studies, however, acquiring the right plant species is difficult and often leads to the extraction of incorrect ones. With advances in the Internet of Things and artificial intelligence, it has become feasible to acquire these plant species accurately by employing such techniques for detection and classification in dense forests and oceans. Many machine learning and deep learning architectures have traditionally been employed for this task, but despite their advantages these approaches still lag in performance. To improve detection and classification performance, this article proposes a deep learning architecture: a recurrent neural network (RNN) designed and implemented to detect and classify medicinal plants acquired in the form of images. An RNN is well suited to processing images containing complex structures and can achieve high recognition accuracy. First, an image preprocessing step based on contrast enhancement improves pixel quality and normalizes the complex structures in the image. The preprocessed image is then passed to the recurrent layer of the network, which identifies complex relationships among the image pixels and represents them as a feature map. A dense layer with a softmax function then processes the feature map to classify and predict the medicinal plant species. Experimental results are obtained on the benchmark Mendeley dataset.
Experimental and performance analysis show that the proposed model achieves 98.5% recognition accuracy, outperforming conventional approaches.
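The pipeline described in the abstract (contrast enhancement, a recurrent layer over the image, and a dense softmax classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact configuration: the histogram-equalization form of contrast enhancement, the single Elman-style recurrent layer, the row-per-time-step encoding, and all layer sizes are assumptions made for demonstration.

```python
import numpy as np

def contrast_enhance(img):
    """Histogram equalization: spread pixel intensities over the full [0, 255] range
    (one common form of contrast enhancement; assumed here for illustration)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                   # normalized cumulative distribution
    return np.interp(img.ravel(), np.arange(256), cdf * 255).reshape(img.shape)

def softmax(z):
    e = np.exp(z - z.max())               # subtract max for numerical stability
    return e / e.sum()

def rnn_classify(img, Wx, Wh, b, Wy, by):
    """Treat each image row as one time step of a simple (Elman) RNN; the final
    hidden state acts as the feature map, classified by a dense softmax layer."""
    h = np.zeros(Wh.shape[0])
    for row in img:                       # one time step per image row
        h = np.tanh(Wx @ row + Wh @ h + b)
    return softmax(Wy @ h + by)           # class probabilities

# Toy example: a 32x32 grayscale image and 5 hypothetical plant classes
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)
img = contrast_enhance(img) / 255.0       # preprocess, then scale to [0, 1]

hidden, classes = 16, 5
probs = rnn_classify(
    img,
    Wx=rng.normal(0, 0.1, (hidden, 32)),
    Wh=rng.normal(0, 0.1, (hidden, hidden)),
    b=np.zeros(hidden),
    Wy=rng.normal(0, 0.1, (classes, hidden)),
    by=np.zeros(classes),
)
print(probs)                              # softmax output: sums to 1
```

In practice the recurrent and dense layers would be trained end-to-end (e.g. with a cross-entropy loss) rather than using random weights as in this forward-pass sketch.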