Abstract: Predicting hospital length of stay (LoS) stands as a critical factor in shaping public health strategies. This data serves as a cornerstone for governments to discern trends, patterns, and avenues for enhancing healthcare delivery. In this study, we introduce a robust hybrid deep learning model, a combination of multi-layer Convolutional Neural Networks (CNNs), Gated Recurrent Units (GRUs), and dense neural networks, that outperforms 11 conventional and state-of-the-art Machine Learning (ML) and Deep Learning (DL) methodologies in accurately forecasting inpatient hospital stay duration. Our investigation delves into the implementation of this hybrid model, scrutinising variables such as geographic indicators tied to caregiving institutions, demographic markers encompassing patient ethnicity, race, and age, as well as medical attributes such as the CCS diagnosis code, APR DRG code, illness severity metrics, and hospital stay duration. Statistical evaluations reveal that the highest LoS accuracy is achieved by our proposed model (CNN-GRU-DNN), which averages 89% across a 10-fold cross-validation test, surpassing LSTM, BiLSTM, GRU, and CNN models by 19%, 18.2%, 18.6%, and 7%, respectively. Accurate LoS predictions not only empower hospitals to optimise resource allocation and curb expenses associated with prolonged stays but also pave the way for novel strategies in hospital stay management. This avenue holds promise for catalysing advancements in healthcare research and innovation, inspiring a new era of precision-driven healthcare practices.
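A minimal sketch of how such a CNN-GRU-DNN stack could be assembled in Keras is shown below; the sequence length, layer widths, and number of LoS classes are illustrative assumptions, not the configuration reported in the study.

```python
# Hedged sketch of a CNN-GRU-DNN hybrid for length-of-stay classification,
# assuming tabular patient records reshaped into short feature sequences.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, N_CLASSES = 16, 1, 10  # assumed input layout and LoS bins

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    # Multi-layer convolutional front end extracts local feature patterns
    layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
    layers.Conv1D(128, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    # GRU layer models dependencies across the feature sequence
    layers.GRU(64),
    # Dense head produces the LoS class probabilities
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```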
Abstract: Diagnosing lung inflammation, particularly pneumonia, is of paramount importance for effectively treating and managing the disease. Pneumonia is a common respiratory infection caused by bacteria, viruses, or fungi and can indiscriminately affect people of all ages. As highlighted by the World Health Organization (WHO), this prevalent disease tragically accounts for a substantial 15% of global mortality in children under five years of age. This article presents a comparative study of the Inception-ResNet deep learning model's performance in diagnosing pneumonia from chest radiographs. The study leverages Mendeley's chest X-ray image dataset, which contains 5,856 2D images, including both viral and bacterial pneumonia X-ray images. The Inception-ResNet model is compared with seven other state-of-the-art convolutional neural networks (CNNs), and the experimental results demonstrate the Inception-ResNet model's superiority in extracting essential features and saving computation runtime. Furthermore, we examine the impact of transfer learning with fine-tuning on improving the performance of deep convolutional models. This study provides valuable insights into using deep learning models for pneumonia diagnosis and highlights the potential of the Inception-ResNet model in this field. In terms of classification accuracy, Inception-ResNet-V2 showed superior performance compared to the other models, including ResNet152V2, MobileNet-V3 (Large and Small), EfficientNetV2 (Large and Small), InceptionV3, and NASNet-Mobile, with substantial margins. It outperformed them by 2.6%, 6.5%, 7.1%, 13%, 16.1%, 3.9%, and 1.6%, respectively, demonstrating its significant advantage in accurate classification.
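The two-stage transfer-learning-with-fine-tuning scheme examined here can be sketched as follows; the image size, classification head, and unfreezing depth are assumptions for illustration, not the exact training recipe of the article.

```python
# Hedged sketch of transfer learning with fine-tuning using InceptionResNetV2.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

base = InceptionResNetV2(weights="imagenet", include_top=False,
                         input_shape=(299, 299, 3))
base.trainable = False  # stage 1: train only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Stage 2: fine-tune the upper part of the backbone with a small learning rate
base.trainable = True
for layer in base.layers[:-50]:   # assumed unfreezing depth
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
```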
Abstract: Co-pyrolysis of biomass feedstocks with polymeric wastes is a promising strategy for improving the quantity and quality parameters of the resulting liquid fuel. Numerous experimental measurements are typically conducted to find the optimal operating conditions. However, performing co-pyrolysis experiments is highly challenging due to the need for costly and lengthy procedures. Machine learning (ML) provides capabilities to cope with such issues by leveraging existing data. This work aims to introduce an evolutionary ML approach to quantify the (by)products of the biomass-polymer co-pyrolysis process. A comprehensive dataset covering various biomass-polymer mixtures under a broad range of process conditions is compiled from the qualified literature. The database is subjected to statistical analysis and mechanistic discussion. The input features are constructed using an innovative approach to reflect the physics of the process. The constructed features are subjected to principal component analysis to reduce their dimensionality. The obtained scores are introduced into six ML models. A Gaussian process regression model tuned by a particle swarm optimization algorithm presents better prediction performance (R2 > 0.9, MAE < 0.03, and RMSE < 0.06) than the other developed models. The multi-objective particle swarm optimization algorithm successfully finds the optimal independent parameters.
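A minimal sketch of the modeling chain described above, PCA scores feeding a Gaussian process regression model, is given below. The feature matrix, targets, number of components, and kernel settings are placeholders, and the particle swarm optimization of the GPR hyperparameters reported in the work is not reproduced.

```python
# Hedged sketch: standardise features, project to principal-component scores,
# and fit a Gaussian process regression model on the scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # placeholder co-pyrolysis features
y = rng.normal(size=200)                # placeholder product yield

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                # dimensionality reduction to PC scores
    GaussianProcessRegressor(kernel=kernel, normalize_y=True),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```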
Abstract: This chapter proposes using the Moth Flame Optimization (MFO) algorithm for fine-tuning a Deep Neural Network to recognize different underwater sonar datasets. As with other models evolved by metaheuristic algorithms, premature convergence, trapping in local minima, and failure to converge in a reasonable time are three defects that MFO confronts in solving problems with a high-dimensional search space. Spiral flying is the key component of the MFO, as it determines how the moths adjust their positions in relation to flames; thereby, the shape of the spiral motion can regulate the transition between the exploration and exploitation phases. Therefore, this chapter investigates the efficiency of seven spiral motions with different curvatures and slopes in the performance of the MFO, especially for underwater target classification tasks. To assess the performance of the customized model, in addition to the benchmark Sejnowski & Gorman dataset, two experimental sonar datasets, i.e., a passive sonar dataset and an active sonar dataset, are exploited. The results of MFO and its modifications are compared with four recent nature-inspired algorithms, including the Heap-Based Optimizer (HBO), Chimp Optimization Algorithm (ChOA), Ant Lion Optimization (ALO), and Stochastic Fractal Search (SFS), as well as the classic Particle Swarm Optimization (PSO). The results confirm that the customized MFO outperforms the other state-of-the-art models, with classification rates increased by 1.5979, 0.9985, and 2.0879 for the Sejnowski & Gorman, passive, and active datasets, respectively. The results also confirm that the time complexity is not significantly increased by using different spiral motions.
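The role of the spiral shape can be illustrated with the sketch below. Only the logarithmic spiral is the original MFO update; the alternative shape shown is an illustrative assumption standing in for the seven curvatures studied in the chapter.

```python
# Hedged sketch of the moth position update in MFO with selectable spiral shape.
import numpy as np

def mfo_update(moth, flame, t, spiral="logarithmic", b=1.0):
    """Move a moth toward a flame along a chosen spiral; t is drawn from [-1, 1]."""
    d = np.abs(flame - moth)                      # distance to the flame
    if spiral == "logarithmic":                   # classic MFO spiral
        return d * np.exp(b * t) * np.cos(2 * np.pi * t) + flame
    if spiral == "hyperbolic":                    # assumed alternative curvature
        return d * (1.0 / (np.abs(t) + 1e-9)) * np.cos(2 * np.pi * t) + flame
    raise ValueError(f"unknown spiral: {spiral}")

moth = np.array([0.2, -1.3])
flame = np.array([1.0, 0.5])
t = np.random.uniform(-1, 1, size=moth.shape)
print(mfo_update(moth, flame, t))
```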
Abstract: A definitive diagnosis of a brain tumour is essential for enhancing treatment success and patient survival. However, it is difficult to manually evaluate the multiple magnetic resonance imaging (MRI) images generated in a clinic. Therefore, more precise computer-based tumour detection methods are required. In recent years, many efforts have investigated classical machine learning methods to automate this process. Deep learning techniques have recently sparked interest as a means of diagnosing brain tumours more accurately and robustly. The goal of this study, therefore, is to employ brain MRI images to distinguish between healthy and unhealthy patients (including tumour tissues). As a result, an enhanced convolutional neural network is developed in this paper for accurate brain image classification. The enhanced convolutional neural network structure is composed of components for feature extraction and optimal classification. The Nonlinear Lévy Chaotic Moth Flame Optimizer (NLCMFO) optimizes the hyperparameters for training the convolutional neural network layers. Using the BRATS 2015 dataset and brain image datasets from Harvard Medical School, the proposed model is assessed and compared with various optimization techniques. The optimized CNN model outperforms other models from the literature by providing 97.4% accuracy, 96.0% sensitivity, 98.6% specificity, 98.4% precision, and 96.6% F1-score (the harmonic mean of precision and recall).
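The hyperparameter-tuning loop implied above can be sketched as follows: a candidate vector of CNN hyperparameters is decoded, the network is trained briefly, and validation accuracy serves as the fitness value. The NLCMFO optimizer itself is not reproduced here, all ranges and the patch size are assumptions, and random placeholder data is used.

```python
# Hedged sketch of CNN hyperparameter evaluation inside a metaheuristic search.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(n_filters, kernel_size, dense_units, lr):
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),          # assumed MRI patch size
        layers.Conv2D(n_filters, kernel_size, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(dense_units, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # healthy vs. tumour
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def fitness(params, x_tr, y_tr, x_val, y_val):
    """Fitness of one candidate hyperparameter vector = validation accuracy."""
    n_filters, kernel_size, dense_units, lr = params
    model = build_cnn(int(n_filters), int(kernel_size), int(dense_units), lr)
    model.fit(x_tr, y_tr, epochs=2, batch_size=32, verbose=0)
    return model.evaluate(x_val, y_val, verbose=0)[1]

# Example call on random placeholder data
x = np.random.rand(64, 64, 64, 1); y = np.random.randint(0, 2, 64)
print(fitness((16, 3, 32, 1e-3), x[:48], y[:48], x[48:], y[48:]))
```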
Abstract: The Shuffled Frog Leaping Algorithm (SFLA) is one of the most widespread metaheuristic algorithms. It was developed by Eusuff and Lansey in 2006. SFLA is a population-based metaheuristic algorithm that combines the benefits of memetic algorithms with particle swarm optimization. It has been used in various areas, especially engineering problems, due to its ease of implementation and small number of parameters. Many improvements have been made to the algorithm to alleviate its drawbacks, whether through modifications or hybridizations with other well-known algorithms. This paper reviews the most relevant works on this algorithm. An overview of the SFLA is first conducted, followed by the algorithm's most recent modifications and hybridizations. Next, recent applications of the algorithm are discussed. Then, an operational framework of SFLA and its variants is proposed to analyze their use across different cohorts of applications. Finally, future improvements to the algorithm are suggested. The main incentive for conducting this survey is to provide useful information about the SFLA to researchers interested in working on the algorithm's enhancement or application.
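For readers unfamiliar with the algorithm the survey covers, a minimal sketch of the classic SFLA leapfrogging rule is shown below: within a memeplex, the worst frog jumps toward the memeplex best. The step limit and example values are illustrative assumptions.

```python
# Hedged sketch of the basic SFLA worst-frog position update within a memeplex.
import numpy as np

def leap(x_worst, x_best, d_max=2.0):
    """One SFLA update: the worst frog leaps toward the best frog."""
    step = np.random.rand(*x_worst.shape) * (x_best - x_worst)
    step = np.clip(step, -d_max, d_max)          # limit the jump size
    return x_worst + step

x_w = np.array([3.0, -2.0])   # worst frog in the memeplex
x_b = np.array([0.5, 0.1])    # best frog in the memeplex
print(leap(x_w, x_b))
```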
Abstract: In this work, a new multiobjective optimization algorithm, called the multiobjective learner performance-based behavior algorithm, is proposed. The proposed algorithm is based on the process of transferring students from high school to college. The proposed technique produces a set of non-dominated solutions. To judge the ability and efficacy of the proposed multiobjective algorithm, it is evaluated against a group of benchmarks and five real-world engineering optimization problems. Additionally, to evaluate the proposed technique quantitatively, several of the most widely used metrics are applied. Moreover, the results are confirmed statistically. The proposed work is then compared with three multiobjective algorithms: MOWCA, NSGA-II, and MODA. Similar to the proposed technique, the other algorithms in the literature were run against the benchmarks and the real-world engineering problems utilized in the paper. The algorithms are compared with each other using descriptive, tabular, and graphical demonstrations. The results prove the ability of the proposed work to provide a set of non-dominated solutions and show that the algorithm outperformed the other participating algorithms in most cases.
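The notion of a non-dominated solution set that the algorithm maintains can be illustrated with the short Pareto-filtering sketch below; minimization of all objectives is assumed, and the example objective vectors are placeholders.

```python
# Hedged sketch of non-dominated (Pareto) filtering for a multiobjective archive.
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (minimization of all objectives)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(objectives):
    """Return the indices of the non-dominated solutions."""
    front = []
    for i, a in enumerate(objectives):
        if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i):
            front.append(i)
    return front

objs = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 1.0], [2.5, 3.5]])
print(non_dominated(objs))   # the last point is dominated by the second
```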
Abstract: In this paper, a novel swarm intelligence algorithm called the ant nesting algorithm (ANA) is proposed. The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although the algorithm is inspired by the swarming behavior of ants, it has no algorithmic similarity with the ant colony optimization (ACO) algorithm. It is worth mentioning that ANA is a continuous algorithm that updates the search-agent position by adding a rate of change (e.g., step or velocity). ANA computes the rate of change differently, as it uses previous and current solutions and their fitness values during the optimization process to generate weights by utilizing the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), five modified versions of PSO, whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results. Finally, the algorithm is employed to optimize two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm's capability in optimizing real-world problems.
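The sketch below illustrates the kind of update described above: the rate of change is scaled by a hypotenuse-style weight built from the gap between the previous and current solutions and their fitness values. This is an illustrative interpretation of the Pythagorean-theorem weighting, not the paper's exact equation.

```python
# Hedged, illustrative sketch of a Pythagorean-weighted position update.
import numpy as np

def ana_step(x_prev, x_curr, f_prev, f_curr, x_best):
    """One illustrative update driven by a Pythagorean-style weight (assumption)."""
    # Hypotenuse of the position change and the fitness change acts as the weight
    w = np.sqrt(np.sum((x_curr - x_prev) ** 2) + (f_curr - f_prev) ** 2)
    rate_of_change = w * np.random.rand(*x_curr.shape) * (x_best - x_curr)
    return x_curr + rate_of_change

x_prev, x_curr = np.array([1.0, 2.0]), np.array([0.8, 1.5])
print(ana_step(x_prev, x_curr, f_prev=5.2, f_curr=4.1, x_best=np.zeros(2)))
```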
Abstract: In this paper, a novel deterioration and damage identification procedure (DIP) is presented and applied to building models. The challenge associated with applications on these types of structures is related to the strong correlation of responses, which becomes further complicated when coping with real ambient vibrations with high levels of noise. Thus, a DIP is designed utilizing low-cost ambient vibrations to analyze the acceleration responses using the Stockwell transform (ST) to generate spectrograms. Subsequently, the ST outputs become the input of two series of Convolutional Neural Networks (CNNs) established for identifying deterioration and damage to the building models. To the best of our knowledge, this is the first time that both damage and deterioration have been evaluated on building models through a combination of ST and CNN with high accuracy.
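A sketch of the spectrogram-to-CNN pipeline is given below. A short-time Fourier spectrogram stands in for the Stockwell transform (whose routine is not included here), and the signal, network layout, and number of classes are placeholder assumptions.

```python
# Hedged sketch: time-frequency image of an acceleration record fed to a CNN.
import numpy as np
import tensorflow as tf
from scipy.signal import stft
from tensorflow.keras import layers, models

fs = 200.0                                        # assumed sampling rate (Hz)
signal = np.random.randn(2000)                    # placeholder acceleration record
_, _, Z = stft(signal, fs=fs, nperseg=128)
spectrogram = np.abs(Z)[..., np.newaxis]          # (freq, time, 1) image

model = models.Sequential([
    layers.Input(shape=spectrogram.shape),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),        # assumed damage/deterioration classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.predict(spectrogram[np.newaxis, ...]).shape)
```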
Abstract: In this article, an original data-driven approach is proposed to detect both linear and nonlinear damage in structures using output-only responses. The method deploys variational mode decomposition (VMD) and a generalised autoregressive conditional heteroscedasticity (GARCH) model for signal processing and feature extraction. To this end, VMD decomposes the response signals into intrinsic mode functions (IMFs). Afterwards, the GARCH model is utilised to represent the statistics of the IMFs. The model coefficients of the IMFs construct the primary feature vector. Kernel-based principal component analysis (PCA) and linear discriminant analysis (LDA) are utilised to reduce the redundancy of the primary features by mapping them to a new feature space. The informative features are then fed separately into three supervised classifiers, namely a support vector machine (SVM), k-nearest neighbour (kNN), and a fine tree. The performance of the proposed method is evaluated on two experimentally scaled models in terms of linear and nonlinear damage assessment. Kurtosis and ARCH tests confirm the compatibility of the GARCH model.
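A hedged sketch of the feature-extraction and classification stages follows: GARCH(1,1) coefficients of each (already computed) IMF form the feature vector, which is reduced by kernel PCA and classified with an SVM. The VMD step, the LDA stage, and the other two classifiers are omitted; the `arch` package is assumed for the GARCH fit, and the data are random placeholders.

```python
# Hedged sketch: GARCH coefficients of IMFs -> kernel PCA -> SVM classifier.
import numpy as np
from arch import arch_model
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def garch_features(imfs):
    """Fit a GARCH(1,1) model to each IMF and stack the fitted coefficients."""
    feats = []
    for imf in imfs:
        res = arch_model(imf, vol="GARCH", p=1, q=1).fit(disp="off")
        feats.append(res.params.values)           # mean, omega, alpha, beta
    return np.concatenate(feats)

rng = np.random.default_rng(1)
X = np.vstack([garch_features(rng.normal(size=(3, 400))) for _ in range(20)])
y = rng.integers(0, 2, size=20)                   # placeholder damage labels

clf = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=5, kernel="rbf"),
                    SVC(kernel="rbf"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```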