• procedures installed on the operator’s computer (visualization, recording of film documentation).

Fig.1. Single camera placed in front of the robot

All procedures of the VS were elaborated in Matlab and C++ and operate under Linux. Three modes of VS operation are possible [2] [3]:
• manual – images (films) are visualized on the operator’s monitor and film documentation is recorded. The robot is controlled by the operator. Image processing and recognition are inactive.
• autonomous – recorded images are sent to the operator’s computer as single images (they are selected after decompression performed on the on-board computer). Image processing and recognition are active.
• „with a teacher” – films are sent to the operator’s computer. The robot is controlled by the operator. Realization of the operator’s commands is synchronized with activation of the image processing and recognition procedures. The goal of this mode is to gather information on robot control; this information is used in a self-learning process that is currently under development.

2. Image registration, transmission and visualization

Since duct interiors are dark and closed, and in most cases the duct walls are shiny, the choice of lighting is of significant importance [4]. Image recording was therefore preceded by an examination of different lighting sources; the most important requirements were small size and low energy consumption. Several sources were tested (Fig.2), including a few kinds of LEDs and white bulbs. As a result, a light source consisting of 12 diodes (of the kind used in car headlights) was selected. Image recording was performed by a set consisting of a digital camera (520 TV lines, 0.3 Lux, 2.8 mm objective, 96-degree vision angle) and a CTR-1472 frame grabber (PC-104 standard). Films have a resolution of 720×576 pixels and are stored in MPEG-4 compressed format [1].
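The division of responsibilities between the three modes can be summarized as a configuration table. A minimal sketch, in Python rather than the Matlab/C++ of the original system; all names and the flag set are hypothetical, chosen only to mirror the description above:

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTONOMOUS = "autonomous"
    WITH_TEACHER = "with_teacher"

# Which subsystems each mode activates, as described in the text:
# manual: films visualized and recorded, operator control, recognition off;
# autonomous: single decompressed images, recognition on, no operator control;
# "with a teacher": films + operator control + recognition, logged for self-learning.
MODE_CONFIG = {
    Mode.MANUAL:       {"send_films": True,  "recognition": False,
                        "operator_control": True,  "log_for_learning": False},
    Mode.AUTONOMOUS:   {"send_films": False, "recognition": True,
                        "operator_control": False, "log_for_learning": False},
    Mode.WITH_TEACHER: {"send_films": True,  "recognition": True,
                        "operator_control": True,  "log_for_learning": True},
}

def active_subsystems(mode: Mode) -> dict:
    """Return the subsystem flags for the requested VS operation mode."""
    return MODE_CONFIG[mode]
```

Keeping the mode logic in one table makes it easy to verify that, for instance, recognition is never run while plain film streaming is the only task.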
Transmission is active in the manual mode. In the autonomous mode, decompression and selection are performed; an image is processed only if a change in the robot’s neighborhood has occurred.

Fig.2. Selected examples of light sources: a) white LED, b) blue LED, c) LED headlamp, d) car headlight

Only images transmitted to the operator’s monitor can be visualized. Depending on the mode, the operator observes transmitted films (manual and „with a teacher” modes) or single decompressed images (autonomous mode). Transmitted images are stored as film documentation; collecting such documentation is one of the main tasks of the robot [2].

3. Image processing and analysis

Two main approaches to image analysis were elaborated. The goal of the first one was to extract image features. Monochrome transformation, reflection removal, binarization, histogram equalization and filtration were applied. Results of these procedures are shown in Fig.3.

Fig.3. Examples of images and results of their processing

Analysis procedures were applied to the images shown in Fig.3. The results of the analysis are 5 features (shape factors and moments) calculated for the identified objects. These features take different values for different duct shapes and obstacles, which makes it possible to identify them. However, the performed research showed that image recognition based on these values does not give the expected results in the case of composed shapes (curves and dimension changes). Moreover, it requires that advanced and time-consuming image-processing methods be applied.
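The paper does not list the exact five features, but a typical shape factor of this kind is circularity, computed from the area and perimeter of a binarized object. A minimal pure-Python sketch, assuming a binary image represented as a list of rows of 0/1 values; the helper names are hypothetical:

```python
import math

def area(img):
    # Area = number of foreground pixels in the binary image.
    return sum(sum(row) for row in img)

def perimeter(img):
    # Count foreground pixels that touch the border or at least one
    # background pixel (4-connectivity) -- a simple boundary estimate.
    h, w = len(img), len(img[0])
    p = 0
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not img[ny][nx]
                       for ny, nx in neighbors):
                    p += 1
    return p

def circularity(img):
    # Classic shape factor 4*pi*A / P^2: close to 1 for compact, disc-like
    # objects, smaller for elongated shapes such as duct curves.
    a, p = area(img), perimeter(img)
    return 4 * math.pi * a / (p * p) if p else 0.0
```

Because such factors are invariant to the object’s position in the frame, they separate simple duct cross-sections well, but, as noted above, they break down for composed shapes.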
As a result, another approach was elaborated. Images were resampled to the lowest resolution at which the single objects visible in the image could still be distinguished (Fig.4). These images were the inputs to the neural networks applied at the recognition stage.

Fig.4. Resolution resampling: a) and c) 720×576, b) and d) 30×30

4. Image recognition

Image recognition was based on neural networks that were trained and tested with the use of images recorded in ducts of different configurations (Fig.5).

Fig.5. Ventilation ducts used as the test stage

A library of images and patterns was elaborated. As a result of tests of different neural networks, a structure consisting of several three-layer perceptrons was applied. Each single network corresponds to one distinguished obstacle (shape or curve). Depending on the image resolution, a single network has a different number of inputs; the lowest resolution at which shapes remain distinguishable was 30×30 pixels. All networks have the same structure, established by trial and error: the input layer has 10 neurons (tangensoidal activation function), the hidden layer has 3 neurons (tangensoidal activation function) and the output layer has 2 neurons (logistic activation function). The Scaled Conjugate Gradient algorithm was used as the training method. For each network, examples were selected in the following way: the first half of the examples were the shape to be recognized, and the second half were randomly selected images representing other shapes. This approach is the result of numerous tests and gave the best effects.

It must be stressed that the results of neural network testing strongly depend on the lighting and the camera objective, as well as on the number of examples and, first of all, on the image resolution. The present tests yielded a classification efficiency of about 88%. The low image resolution and the numbers of neurons in a single network required that 54000 examples be used during network training. In order to increase the number of testing images, a few different kinds of noise were introduced into the images.

5. Summary

The most important factor that lowers recognition correctness is the low image resolution. However, increasing the resolution leads to a non-linear decrease in the number of examples necessary for network training. At the present stage of the research, the application of cellular networks is being tested. It is expected that the outputs of these networks can be used as inputs to the three-layer perceptron. Most importantly, these outputs seem to describe shapes more precisely than shape factors and moments, while their number is lower than the number of pixels of images with increased resolution.

References

[1] M. Adamczyk: “Mechanical carrier of a mobile robot for inspecting ventilation ducts”. In the current proceedings of the 7th International Conference “MECHATRONICS 2007”.
[2] W. Moczulski, M. Adamczyk, P. Przystałka, A. Timofiejczuk: “Mobile robot for inspecting ventilation ducts”. In the current proceedings of the 7th International Conference “MECHATRONICS 2007”.
[3] P. Przystałka, M. Adamczyk: “EmAmigo framework for developing behavior-based control systems of inspection robots”. In the current proceedings of the 7th International Conference “MECHATRONICS 2007”.
[4] A. Bzymek: “Experimental analysis of images taken with the use of different types of illumination”. Proceedings of the OPTIMESS 2007 Workshop, Leuven, Belgium.
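As an illustration of the classifier structure described in Section 4 — a three-layer perceptron taking a flattened 30×30 image (900 inputs) through tangensoidal layers of 10 and 3 neurons to a 2-neuron logistic output — the forward pass can be sketched in Python. This is an untrained sketch with random weights for illustration only; the original per-obstacle networks were trained with the Scaled Conjugate Gradient algorithm, which is not reproduced here:

```python
import math
import random

random.seed(0)

def dense(x, W, b, act):
    # One fully connected layer: act(W x + b), computed row by row.
    return [act(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def make_layer(n_in, n_out):
    # Small random weights and zero biases (placeholder for trained values).
    return ([[random.uniform(-0.1, 0.1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

def logistic(v):
    return 1.0 / (1.0 + math.exp(-v))

# 30x30 image flattened to 900 inputs; layers 10 (tanh) -> 3 (tanh) -> 2 (logistic).
W1, b1 = make_layer(900, 10)
W2, b2 = make_layer(10, 3)
W3, b3 = make_layer(3, 2)

def classify(image_900):
    """Forward pass for one obstacle-specific network; returns two values in (0, 1)."""
    h1 = dense(image_900, W1, b1, math.tanh)
    h2 = dense(h1, W2, b2, math.tanh)
    return dense(h2, W3, b3, logistic)
```

One such network per obstacle, each answering a yes/no-style question about its own shape, matches the per-obstacle decomposition described above.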