Browsing by Author "Orhan, Semih"
Now showing 1 - 2 of 2
Master Thesis
Localization of certain animal species in images via training neural networks with image patches (Izmir Institute of Technology, 2017-12) Orhan, Semih; Baştanlar, Yalın

Object detection is one of the most important tasks for computer vision systems. Varying object size, viewing angle, illumination conditions, occlusion, etc. affect the success rate. In recent years, convolutional neural networks (CNNs) have shown great performance on different computer vision problems, including object detection and localization. In this work, we propose a novel training approach for CNNs to localize certain animal species whose bodies have distinctive patterns, such as the spots of leopards or the black-and-white stripes of zebras. To learn these characteristic patterns, small patches are taken from different body parts of the animals and used to train the models. To find the object location in a test image, all locations are visited in a sliding-window fashion: crops are fed to the CNN and the classification scores of all patches are recorded. A heat map generated from these classification scores indicates the object location. The heat maps are then converted to binary images, which yield bounding-box estimates for the objects. The localization performance of our patch-based training is compared with Faster R-CNN, a state-of-the-art CNN-based object detection and localization algorithm. While evaluating the performances, in addition to the standard precision-recall metric, we use area-precision and area-recall, which better represent the potential of the patch-based model. Experimental results show that the proposed training method performs better than Faster R-CNN for most of the evaluated classes. We also show that the patch-based model can be used together with Faster R-CNN to increase its localization performance.
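The patch-based localization pipeline described in the abstract (score patches in a sliding window, build a heat map, binarize it, take the bounding box) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis code: the patch size, stride, threshold, and the single-logit classifier patch_model are all placeholders.

    import numpy as np
    import torch

    PATCH, STRIDE, THRESH = 64, 16, 0.5  # hypothetical values, not from the thesis

    def localize(image: torch.Tensor, patch_model: torch.nn.Module):
        """image: (3, H, W) float tensor; patch_model maps a crop to one logit."""
        _, H, W = image.shape
        heat = np.zeros(((H - PATCH) // STRIDE + 1, (W - PATCH) // STRIDE + 1))
        patch_model.eval()
        with torch.no_grad():
            for i, y in enumerate(range(0, H - PATCH + 1, STRIDE)):
                for j, x in enumerate(range(0, W - PATCH + 1, STRIDE)):
                    crop = image[:, y:y + PATCH, x:x + PATCH].unsqueeze(0)
                    # each patch's classification score fills one heat-map cell
                    heat[i, j] = torch.sigmoid(patch_model(crop)).item()
        # binarize the heat map and take the bounding box of the positive cells
        ys, xs = np.nonzero(heat > THRESH)
        if len(ys) == 0:
            return heat, None
        box = (xs.min() * STRIDE, ys.min() * STRIDE,
               xs.max() * STRIDE + PATCH, ys.max() * STRIDE + PATCH)
        return heat, box  # (x1, y1, x2, y2) in image coordinates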
Doctoral Thesis
Semantic segmentation of panoramic images and panoramic image based outdoor visual localization (Izmir Institute of Technology, 2022-10) Orhan, Semih; Baştanlar, Yalın

360-degree views are captured by full omnidirectional cameras and are generally represented as panoramic images. Unfortunately, these images suffer heavily from spherical distortion at the poles of the sphere. Previous studies on convolutional neural networks (CNNs) have proposed several methods (e.g. equirectangular convolution) to alleviate spherical distortion. Inspired by these efforts, we developed an equirectangular version of the UNet model and evaluated the semantic segmentation performance of UNet and its equirectangular version on an outdoor panoramic dataset. Experimental results showed that the equirectangular version of UNet performed better than UNet. In addition, we released the pixel-level annotated dataset, which is one of the first semantic segmentation datasets of outdoor panoramic images.

In visual localization, localizing perspective query images in a panoramic image dataset can alleviate the non-overlapping-view problem between cameras. Typically, a perspective query image is localized in a panoramic image database by generating 4 or 8 virtual gnomonic views of each panorama, i.e. projecting the sphere onto cube faces. This simplifies the task to a perspective-to-perspective search, but a non-overlapping-view problem may still remain between the query and the gnomonic database images. We therefore propose localizing perspective query images directly in panoramic images by applying sliding windows on the last convolutional layer of CNNs. Features are extracted with R-MAC, GeM, and SFRS. Experimental results showed that the sliding-window approach outperformed 4 gnomonic views, and we obtained competitive results compared with 8 and 12 gnomonic views.

Any city-scale visual localization system has to be robust against long-term changes. Semantic information (e.g. building surfaces) is more robust to such changes, and depth maps provide geometric clues. In our work, we utilized semantic and depth information during pose verification, i.e. we checked semantic and depth similarity to verify the poses (retrievals) obtained with an approach that uses only RGB image features. Semantic and depth information are represented with a self-supervised contrastive learning approach (SimCLR). Experimental results showed that pose verification with semantic and depth features improved the visual localization performance of the RGB-only model.
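As a rough illustration of handling equirectangular inputs, the sketch below wraps convolution padding around the 360-degree seam (circular padding along longitude, zero padding toward the poles). This is a simplification for illustration only: the equirectangular convolutions referenced in the abstract also adapt the sampling per latitude, which this sketch does not do.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EquirectConv2d(nn.Module):
        """Conv layer whose padding respects the horizontal wrap of a panorama."""
        def __init__(self, in_ch: int, out_ch: int, k: int = 3):
            super().__init__()
            self.pad = k // 2
            self.conv = nn.Conv2d(in_ch, out_ch, k)  # no built-in padding

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # wrap the left/right edges (longitude is continuous at 360 degrees)
            x = F.pad(x, (self.pad, self.pad, 0, 0), mode="circular")
            # plain zero padding at the top/bottom (the poles)
            x = F.pad(x, (0, 0, self.pad, self.pad))
            return self.conv(x)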
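The direct perspective-in-panorama search can be sketched as follows: slide a window over the panorama's last convolutional feature map and compare pooled descriptors against the query's descriptor. GeM pooling is shown because the abstract names it; the window width, step, and exponent p are illustrative assumptions, and the 360-degree wrap-around (which could be handled by appending the first columns of the feature map to its end) is omitted for brevity.

    import torch
    import torch.nn.functional as F

    def gem(feat: torch.Tensor, p: float = 3.0) -> torch.Tensor:
        """Generalized-mean pooling over spatial dims; feat: (C, H, W)."""
        pooled = feat.clamp(min=1e-6).pow(p).mean(dim=(1, 2)).pow(1.0 / p)
        return F.normalize(pooled, dim=0)  # L2-normalized descriptor

    def best_window_similarity(query_feat: torch.Tensor,
                               pano_feat: torch.Tensor,
                               w: int, step: int = 4) -> float:
        """Max cosine similarity between the query descriptor and sliding-window
        descriptors on the panoramic feature map pano_feat: (C, H, Wp)."""
        q = gem(query_feat)
        _, _, Wp = pano_feat.shape
        sims = [torch.dot(q, gem(pano_feat[:, :, x:x + w]))
                for x in range(0, Wp - w + 1, step)]
        return max(sims).item()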
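Finally, the pose-verification step can be approximated as re-ranking the RGB-only shortlist with similarities of semantic and depth embeddings (assumed here to be L2-normalized SimCLR-style vectors). The shortlist size and fusion weights below are hypothetical placeholders, not values from the thesis.

    import numpy as np

    def verify_poses(rgb_sims: np.ndarray,
                     sem_q: np.ndarray, sem_db: np.ndarray,
                     dep_q: np.ndarray, dep_db: np.ndarray,
                     top_k: int = 20,
                     w_rgb: float = 1.0, w_sem: float = 0.5, w_dep: float = 0.5):
        """rgb_sims: (N,) RGB similarities of the query to N database panoramas.
        sem_q/dep_q: (D,) query embeddings; sem_db/dep_db: (N, D) database."""
        cand = np.argsort(-rgb_sims)[:top_k]   # shortlist from the RGB-only model
        sem = sem_db[cand] @ sem_q             # cosine sims (unit-norm vectors)
        dep = dep_db[cand] @ dep_q
        score = w_rgb * rgb_sims[cand] + w_sem * sem + w_dep * dep
        return cand[np.argsort(-score)]        # verified (re-ranked) shortlist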