- 10th May 2018
- Posted by: Manolis
- Category: Blockchain
Today, artificial intelligence (AI) is everywhere, and new AI-powered devices are being developed that try to understand what they see. This raises an obvious question: why does a machine need to see? This article explains machine vision, its importance and the latest applications of AI in the field.
What is Machine Vision?
Machine vision is the ability of a computer to see: one or more digital cameras capture images, which are then processed through analog-to-digital conversion (ADC) and digital signal processing (DSP). The resulting data is passed on to a computer or a robot controller.
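The capture pipeline above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the ADC step quantises hypothetical analog voltages into discrete levels, and the DSP step is a simple moving-average filter standing in for the far more sophisticated processing a real system performs.

```python
# Hypothetical sketch of the camera-to-data pipeline: ADC quantisation
# followed by one simple DSP step. Values and parameters are illustrative.

def adc(analog_samples, levels=256, v_ref=5.0):
    """Quantise analog voltages in [0, v_ref] into digital levels 0..levels-1."""
    return [min(levels - 1, int(v / v_ref * levels)) for v in analog_samples]

def smooth(signal, window=3):
    """A basic DSP step: moving-average filter to suppress sensor noise."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

raw = [0.0, 1.2, 2.5, 4.9, 2.5, 1.2, 0.0]   # one scan line from the sensor
digital = adc(raw)          # -> discrete grey levels
processed = smooth(digital) # -> smoothed data for the controller
```

The smoothed, digitised signal is what would be handed to the computer or robot controller for further analysis.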
Why are we imparting our primary sense to machines? For machines to relate to the human mind, they must perceive the visual world as we do. This can take the form of a small camera that helps the machine see and understand the world around it. Machine vision is a fast-growing branch of AI that aims to give machines a sense of vision similar to that of humans. Integrating specialised neural networks into a machine helps it identify and understand images from the real world.
Basic image classification is easy for machines, but extracting meaning or information from abstract images remains a challenge. A common analogy for the difference between machine vision and human vision is the comparison between the flight of a plane and that of a bird. Both rely on the same principles of physics to lift into the air, but that does not mean a plane flaps its wings to fly. Although human and machine vision may look at the same things, and the interpreted images will share some commonalities, the end results can differ drastically.
Artificial Intelligence in Machine Vision
Recently, AI techniques have been integrated into several machine vision systems, most of which try to overcome the challenges described above. The following are some applications of AI in machine vision:
• The Pulnix ZiCAM is a smart camera that requires no programming: it is simply shown good and bad parts and learns to differentiate them using a hardware neural network. Its recognition engine extracts 64 features, including histograms, profiles and pixel samples. These features are passed to the neural network, which has 74 outputs, so instead of giving just a pass or fail result, the ZiCAM can be trained to sort products into up to 74 classes.
• Smart Search is an adaptive pattern-locating software developed by Coreco Imaging as part of its Sherlock and MVTools vision packages. It includes an AI-based training wizard to facilitate system set-up: the developer only provides good and bad examples of the object to be checked, and Smart Search learns the characteristics of that object automatically.
• NeuroCheck Compact is a smart camera programmed using the mouse. The system runs the NeuroCheck package, which can compute a number of object features from regions of interest in an image. These features are then passed to a classification module, which determines the type of the region being observed.
• The Sightech Eyebot uses a combination of fuzzy logic and neural networks known as Neuro-Ram. It is available in two formats, the Shape Eyebot and the Spectrum Eyebot. The Shape Eyebot recognises the shape of objects placed in front of the camera and can then detect deviations from the learned shape. The Spectrum Eyebot learns the colour of objects shown to it. The system's output is an integer from 0 to 99 indicating the status of the product shown.
• ILIB software contains pattern recognition, statistical, fuzzy logic and neural network tools. Its neural network function is provided by a multi-layer perceptron with many configurable parameters, including learning rate, momentum, number of neurons, number of layers and activation functions. ILIB also offers pattern recognition techniques such as minimum-distance and K-nearest-neighbour classification, and the distributions of feature vectors can be analysed with statistical techniques such as ANOVA. The software further incorporates fuzzy logic classification functions and genetic imaging: the user provides the system with an original image and the desired processed image, and ILIB uses genetic algorithms to derive the convolution filter, or sequence of filters, required to convert the original image into the target.
• The software of Braintech Inc. comprises a vast range of AI techniques, such as neural networks, fuzzy logic, natural language processing, qualitative maths and genetic algorithms. It has been developed for inspection tasks in the automotive industry, such as mould number recognition, brake shoe identification and positioning of cylinder heads.
• The Common Vision Blox tool MANTO is based on AI research, particularly work on statistical learning theory. It can cope with high degrees of image noise and is recommended for applications in the food industry and in security technology.
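Several of the systems above follow the same pattern: extract simple features from an image, then classify the feature vector. Here is a hedged sketch of that pattern, using a grey-level histogram as the feature (as in the ZiCAM) and K-nearest-neighbour classification (as offered by ILIB). The images, training data and labels are illustrative and not taken from any of the products described.

```python
# Illustrative feature-then-classify pipeline. Images are flat lists of
# grey values 0..255; training examples and labels are made up.

def histogram(image, bins=4, max_val=256):
    """Count how many pixels fall into each grey-level bin."""
    counts = [0] * bins
    for px in image:
        counts[px * bins // max_val] += 1
    return counts

def knn_classify(features, training, k=3):
    """Label by majority vote among the k nearest training feature vectors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda item: dist(item[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical training set: dark parts are "good", bright parts are "bad".
training = [
    (histogram([10, 20, 30, 40]), "good"),
    (histogram([15, 25, 35, 45]), "good"),
    (histogram([200, 210, 220, 230]), "bad"),
    (histogram([190, 205, 215, 225]), "bad"),
]
print(knn_classify(histogram([12, 22, 32, 42]), training))  # prints "good"
```

A hardware system like the ZiCAM replaces the classifier here with a trained neural network, but the extract-features-then-classify flow is the same.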
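The "genetic imaging" idea attributed to ILIB above can also be sketched: a genetic algorithm searches for a convolution kernel that maps a source signal to a desired target. Everything here, the signals, the fitness function and the GA parameters, is illustrative; ILIB's actual algorithm is not described in detail in this article.

```python
# Hedged sketch of deriving a convolution filter with a genetic algorithm.
# Works on 1-D signals for brevity; a real system would use 2-D images.
import random

def convolve(signal, kernel):
    """Zero-padded 1-D convolution (correlation form, for simplicity)."""
    half = len(kernel) // 2
    padded = [0] * half + signal + [0] * half
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(signal))]

def fitness(kernel, source, target):
    """Negative squared error between filtered source and target (0 is perfect)."""
    return -sum((a - b) ** 2 for a, b in zip(convolve(source, kernel), target))

def evolve(source, target, kernel_len=3, pop=30, gens=200, seed=0):
    """Evolve a kernel: keep the fittest half, recombine and mutate the rest."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(kernel_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda k: fitness(k, source, target), reverse=True)
        parents = population[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(kernel_len)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                   # occasional mutation
                child[rng.randrange(kernel_len)] += rng.gauss(0, 0.1)
            children.append(child)
        population = parents + children
    return population[0]

source = [1, 2, 3, 4, 5]
target = convolve(source, [0.25, 0.5, 0.25])  # target made by a known blur
best = evolve(source, target)                 # GA's reconstructed kernel
```

The GA should recover a kernel close to the blur that produced the target; the same search over filter sequences is what lets such a tool turn an example image pair into a working processing chain.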
The combination of technologies like neural networks, fuzzy logic and specialised vision hardware is bridging the gap between human vision and machine vision. This could revolutionise industries such as automotive, security and healthcare. Eventually, robots may attain visual capabilities beyond human vision, making them better able to solve challenging tasks and operate autonomously. Time will tell whether AI is the best or worst thing to happen to humanity.