Browsing by Author "Varol, Rahmetullah"
Now showing items 1 - 3 of 3
Item: Acousto-holographic reconstruction of whole-cell stiffness maps (Nature Research, 2022)
Authors: Varol, Rahmetullah; Karavelioğlu, Zeynep; Ömeroğlu, Sevde; Aydemir, Gizem; Karadağ, Aslıhan; Meco, Hanife E.; Demirçalı, Ali A.; Yılmaz, Abdurrahim; Koçal, Gizem C.; Gençoğlan, Gülsüm; Oruç, Muhammed E.; Esmer, Gökhan B.; Başbınar, Yasemin; Özdemir, Şahin K.; Üvet, Hüseyin

Accurate assessment of cell stiffness distribution is essential because of the critical role of cell mechanobiology in regulating vital cellular processes such as proliferation, adhesion, migration, and motility. Stiffness provides critical information for understanding the onset and progression of various diseases, including the metastasis and differentiation of cancer. Atomic force microscopy and optical trapping set the gold standard in stiffness measurements. However, their widespread use has been hampered by long processing times, unreliable contact-point determination, physical damage to cells, and unsuitability for multiple-cell analysis. Here, we demonstrate a simple, fast, label-free, and high-resolution technique that uses acoustic stimulation and holographic imaging to reconstruct stiffness maps of single cells. We used this acousto-holographic method to determine stiffness maps of HCT116 and CTC-mimicking HCT116 cells and to differentiate between them. Our system would enable widespread use of whole-cell stiffness measurements in clinical and research settings for cancer studies, disease modeling, drug testing, and diagnostics. © 2022, The Author(s).

Item: Deep convolutional neural networks for onychomycosis detection using microscopic images with KOH examination (Wiley, 2022)
Authors: Yılmaz, Abdurrahim; Göktay, Fatih; Varol, Rahmetullah; Gençoğlan, Gülsüm; Üvet, Hüseyin

Background: The diagnosis of superficial fungal infections is still based mostly on direct microscopic examination with potassium hydroxide (KOH) solution.
However, this method can be time-consuming, and its diagnostic accuracy varies widely with the clinician's experience. Objectives: This study presents a deep neural network structure that addresses these problems and performs automatic fungus detection in grayscale images without dyes. Methods: 160 microscopic full-field photographs containing fungal elements, obtained from patients with onychomycosis, and 297 microscopic full-field photographs containing dissolved keratin, obtained from normal nails, were collected. Smaller patches containing fungi (n=1835) and keratin (n=5238) were extracted from these full-field images. VGG16 and InceptionV3 models were developed using these patches to detect fungus and keratin. The diagnostic performance of the models was compared with that of 16 dermatologists using 200 test patches. Results: For the VGG16 model, the InceptionV3 model, and the 16 dermatologists, mean accuracy rates were 88.10% ± 0.8%, 88.78% ± 0.35%, and 74.53% ± 8.57%; mean sensitivity rates were 75.04% ± 2.73%, 74.93% ± 4.52%, and 74.81% ± 19.51%; and mean specificity rates were 92.67% ± 1.17%, 93.78% ± 1.74%, and 74.25% ± 18.03%, respectively. The models were statistically superior to the dermatologists in accuracy and specificity but not in sensitivity (p < 0.0001, p < 0.005, and p > 0.05, respectively). Area under the curve values of the VGG16 and InceptionV3 models were 0.9339 and 0.9292, respectively. Conclusion: Our research demonstrates that it is possible to build an automated system capable of detecting fungi in microscopic images using the proposed deep learning models. It has great potential for AI-based fungal detection applications.

Item: MobileSkin: Classification of skin lesion images acquired using mobile phone-attached hand-held dermoscopes (MDPI, 2022)
Authors: Yılmaz, Abdurrahim; Gençoğlan, Gülsüm; Varol, Rahmetullah; Demirçalı, Ali Anıl; Keshavarz, Meysam; Uvet, Hüseyin

Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. Dermoscopic equipment reveals many lesion patterns that are invisible under visible light, so more accurate decisions can be made regarding the treatment of skin lesions. Images collected with a dermoscope have both improved the performance of human examiners and enabled the development of deep learning models, and the availability of large-scale dermoscopic datasets has allowed such models to classify skin lesions with high accuracy. However, most dermoscopic datasets contain images collected from digital dermoscopic devices, as these devices are frequently used for clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset of dermoscopic images taken with a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures were developed to classify eight different lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights pre-trained on the ImageNet dataset and then fine-tuned on the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, achieved performances of 89.18% and 89.64%. The results were evaluated and compared using the 5-fold cross-validation method.
Our method allows for automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
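The onychomycosis study describes cutting small fungus/keratin patches out of full-field microscopic photographs before training. A minimal sliding-window sketch of that kind of patch extraction is shown below; the patch size and stride are illustrative assumptions, not values reported by the authors.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a square window over a 2-D grayscale image and collect patches.

    Mirrors, in spirit, the patch-extraction step described in the abstract;
    patch_size and stride here are hypothetical, not the study's settings.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# Example: a synthetic 256x256 "full-field" image cut into 64x64 patches
full_field = np.zeros((256, 256), dtype=np.uint8)
patches = extract_patches(full_field, patch_size=64, stride=64)
print(patches.shape)  # (16, 64, 64): a 4x4 grid of non-overlapping patches
```

With a stride smaller than the patch size, the windows overlap and the patch count grows, which is one common way to enlarge a small training set.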
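The accuracy, sensitivity, and specificity rates reported in the onychomycosis abstract follow the standard binary-classification definitions. A short sketch of those definitions, using made-up confusion counts rather than the study's data:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from binary confusion counts.

    Standard definitions only; the counts passed below are illustrative,
    not taken from the study.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = binary_metrics(tp=75, fp=7, tn=93, fn=25)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # 0.84 0.75 0.93
```

Note how a model can beat human readers on accuracy and specificity while matching them on sensitivity, as reported above: the three rates weight false positives and false negatives differently.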
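The MobileSkin results were evaluated with 5-fold cross-validation. A minimal stand-in for that splitting scheme is sketched below; a real pipeline for eight lesion classes would more likely use `sklearn.model_selection.StratifiedKFold` to keep class proportions balanced across folds.

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Partition sample indices into k disjoint folds for cross-validation.

    Each fold serves once as the validation set while the remaining k-1
    folds form the training set. This is a plain (non-stratified) sketch.
    """
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    folds = [indices[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((sorted(train), sorted(val)))
    return splits

splits = k_fold_indices(100, k=5)
train, val = splits[0]
print(len(train), len(val))  # 80 20
```

Averaging a metric over the k validation folds gives a less noisy estimate than a single train/test split, which matters for datasets of this modest size.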