Image Processing

Introduction

Digital image processing is the manipulation of two-dimensional (2D) images by means of a digital computer. The techniques involved treat the image as a two-dimensional signal, a function of the spatial coordinates x and y, and apply standard signal-processing methods to the array of image data. A digital image passes through a series of fundamental stages, each contributing to obtaining the required image data. This processing improves image quality for human perception or computer interpretation. Digital images are processed by computer algorithms to extract the required information in a comprehensible form, from which further calculations and decisions can be made. Thus, fundamental steps such as image acquisition, image enhancement, and color image processing, among others, are vital to understanding digital image processing.

Fundamental steps in digital image processing

There are a number of steps followed in the processing of digital images. The first is image acquisition, which captures the image as raw data. This raw data is then converted into a computer-readable form, usually with the help of additional hardware. Both the raw image data and the common image file formats are supported by the computer software, and the image obtained in software is then processed further to produce the desired result [3].

Image enhancement is the next step of image processing and is among the simplest and most appealing stages. At this step, images are preprocessed in a way that increases the chances of success of the processes that follow. Enhancement brings out detail that is obscured and highlights regions of interest in the image [1]. Contrast stretching and brightness adjustment are typical examples of image enhancement.
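A simple enhancement of this kind can be written as a per-pixel linear transform. The sketch below (pure Python; the function name and the gain and bias values are illustrative, not taken from the text) adjusts contrast and brightness of an 8-bit grayscale image stored as a list of rows:

```python
def enhance(pixels, gain=1.5, bias=20):
    """Linear contrast/brightness adjustment: new = clamp(gain*p + bias, 0, 255)."""
    out = []
    for row in pixels:
        # Clamp each adjusted value to the valid 8-bit range.
        out.append([min(255, max(0, int(gain * p + bias))) for p in row])
    return out

image = [[10, 50], [100, 200]]
print(enhance(image))  # dark pixels brighten; bright pixels clip at 255
```

The gain stretches the contrast while the bias shifts the overall brightness; clamping keeps the result within the 8-bit grayscale range.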

Image restoration is another processing step for digital images. Like enhancement, it improves the appearance of an image, but its particular task is removing noise introduced by the imaging setup. A typical example is additive white Gaussian noise, which arises during image data transmission; the degradation may also appear as blurring or a shaking effect caused by a poor imaging setup. Because the degradation process is usually not known exactly [3], these artifacts are corrected by introducing probabilistic and mathematical models.
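One common way to suppress additive noise of this kind is neighborhood averaging. The following is a minimal sketch of a 3x3 mean filter in pure Python (the function name is illustrative; borders are simply left unchanged, which a real implementation would handle more carefully):

```python
def mean_filter(img):
    """Replace each interior pixel by the average of its 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy; borders stay as in the input
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

noisy = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]
print(mean_filter(noisy))  # the isolated spike at the center is smoothed out
```

Averaging spreads an isolated noise spike over its neighborhood, which is why mean filtering reduces Gaussian noise at the cost of some blurring.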

The next step is color image processing, which deals with images that are to be produced in color. With the increasing use of pictorial data on the Internet, this field of digital image processing is becoming more common.

Additionally, wavelet analysis is another fundamental step in digital image processing. A wavelet is a mathematical function by which the image data is separated into components at different frequencies. The image is divided successively into smaller parts, and each part is studied at a signal scale that matches its resolution [6]. Wavelets have diverse applications in the field of digital image processing.
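The separation into frequency components can be illustrated with the one-dimensional Haar wavelet, the simplest wavelet. One decomposition level is sketched below in pure Python (function name illustrative; the input length is assumed even):

```python
def haar_step(signal):
    """One Haar wavelet level: pairwise averages (low-pass) and differences (high-pass)."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

print(haar_step([9, 7, 3, 5]))  # half-resolution averages plus detail coefficients
```

The averages form a half-resolution approximation of the signal, while the differences record the detail needed to reconstruct it exactly; applying the step repeatedly to the averages yields the successively smaller parts described above. For images, the same step is applied along rows and then columns.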

One of the major applications of wavelets in digital image processing is compression, the next step in the process. Compression reduces the size of the image data to a required size. Images are compressed before transmission because of the large amounts of pictorial data carried over the Internet; the aim is to save the bandwidth required to convey the data over the transmission channel. Image compression is also vital for storage, memory, and data systems, because it enables a large amount of information to be stored in a small space. Storage devices of the past were small, unlike current devices, which are large and can therefore accommodate many compressed images.
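As a minimal illustration of lossless compression (a simpler scheme than the wavelet-based methods discussed above, chosen here only for clarity), run-length encoding replaces repeated pixel values with (value, count) pairs:

```python
def rle_encode(pixels):
    """Run-length encode a flat pixel sequence into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([p, 1])  # start a new run
    return [(v, c) for v, c in runs]

row = [255, 255, 255, 0, 0, 7]
print(rle_encode(row))  # six pixels stored as three pairs
```

Long runs of identical values, common in graphics and document images, shrink dramatically; practical image codecs combine transforms such as wavelets with entropy coding for much higher ratios.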

Morphological processing is another process, dealing with the extraction of objects of interest and the removal of unwanted objects from the image. For example, two sliced components of an object can be merged through morphological processing. This is done using the opening and closing morphological operations, which are based on dilation and erosion [4].
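Dilation and erosion on a binary image can be sketched in pure Python. A 3x3 square structuring element is assumed here (the text does not specify one); opening is then erosion followed by dilation, and closing is dilation followed by erosion:

```python
def dilate(img):
    """Binary dilation: a pixel is set if any 3x3 neighbor is set."""
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def erode(img):
    """Binary erosion: a pixel survives only if all 3x3 neighbors are set."""
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
             for x in range(w)] for y in range(h)]

def closing(img):
    """Closing fills small gaps, merging nearby components as described above."""
    return erode(dilate(img))
```

Dilation grows object regions so that nearby sliced components touch, and the following erosion restores the overall size, which is how closing merges the two components mentioned in the text.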

Image segmentation, the next step, separates an image into distinct regions; in the simplest case, the computer divides the objects from the image background. It is one of the most difficult tasks in image processing. Each segment consists of connected pixels, and every pixel holds a label indicating the region to which it belongs. This step is especially significant in automated computer applications, since the objects it extracts are processed further.
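The simplest way to divide objects from the background is global thresholding. The sketch below labels each pixel as object (1) or background (0); the threshold value is illustrative:

```python
def segment(img, threshold=128):
    """Label each pixel 1 (object) or 0 (background) by a grayscale threshold."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

gray = [[10, 200], [130, 90]]
print(segment(gray))  # bright pixels become object labels
```

Real segmentation methods go far beyond a single global threshold (adaptive thresholds, region growing, edge-based methods), which is why the text calls this one of the most difficult tasks in image processing.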

Representation and description is the process that follows segmentation. The form it takes depends on the computation method and application to be used, since the segmented data is still in raw form: the connected pixel regions must be represented in a way suited to further processing. Where the external shape of objects is required, boundary representation is used; where the properties inside the region need to be studied, regional representation is used. Before further processing, the regions are described: properties associated with the various segments of the digital image, such as color, texture, length, aspect ratio, perimeter, breadth, and width, are highlighted [2].

The final step of digital image processing is object recognition, in which objects are classified based on the features and attributes extracted during representation and description. For instance, to distinguish a square from a rectangle, one compares the width and breadth of both objects and classifies them accordingly [3].
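The square-versus-rectangle example can be sketched directly from those two descriptors (the function name and tolerance are illustrative):

```python
def classify(width, breadth, tol=1e-6):
    """Classify a right-angled quadrilateral by comparing its two side lengths."""
    return "square" if abs(width - breadth) < tol else "rectangle"

print(classify(4, 4))  # equal sides
print(classify(4, 2))  # unequal sides
```

Real recognizers compare whole feature vectors (shape, texture, color) against learned class models, but the principle is the same: measured descriptors drive the classification decision.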

Acquisition of digital image

            Digital image acquisition is the mapping of the three-dimensional (3D) visual world onto a 2D array of image data [2]. An array of photosensors is typically used to acquire an image, though other techniques and sensor configurations can also be used. Each light sensor generates a voltage that is directly proportional to the intensity of the light striking it. Analog-to-digital conversion is then required to convert this voltage into digital form, through the processes of sampling and quantization.

The image, a 2D representation of a 3D scene as shown in figure 1 (a) below, is a function of two coordinate variables, x and y [1]. Thus f(x, y) is the light intensity at a point, which means that when digitizing an image it is vital to digitize both the x-y coordinates and the amplitude (intensity) values. Digitizing the coordinates is known as sampling, whereas quantization is the term for digitizing the amplitude.

 

Figure 1

The continuous voltage signal in figure 1 (b) is directly proportional to the light intensities along segment AB indicated in figure 1 (a). Digitization must therefore be done along both dimensions. Digitization along the x direction is obtained by taking samples of the signal at regular intervals along the horizontal line; this process is referred to as sampling, and the samples are indicated by the white boxes along the intensity signal. The sampled amplitudes must then be digitized as shown in figure 1 (c), where the vertical scale has been divided into eight grayscale levels. Each sample is assigned one of these grayscale values depending on where it lies, through a process known as quantization, as indicated in figure 1 (d). Three approximation schemes, round-off, floor, and ceiling, can be used to assign quantized values to the sample points. To understand quantization levels, consider figure 2 below, which shows a 452×374-pixel X-ray image of a human skull represented with 256 grayscale levels. It should be noted that the imaging system can affect the image representation when the number of available gray levels is decreased, as shown in figure 2.
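The three approximation schemes can be sketched as follows, mapping a normalized sample onto eight gray levels as in figure 1 (the function name and parameters are illustrative):

```python
import math

def quantize(sample, levels=8, vmax=1.0):
    """Map a sample in [0, vmax] onto integer gray levels with three schemes."""
    x = sample / vmax * (levels - 1)  # position on the 0..7 gray-level scale
    return {
        "round": round(x),       # round-off to the nearest level
        "floor": math.floor(x),  # always take the level below
        "ceiling": math.ceil(x), # always take the level above
    }

print(quantize(0.4))  # one sample, three possible quantized values
```

All three schemes introduce quantization error of at most one level; round-off minimizes the average error, which is why it is the usual choice.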

 

Figure 2

Alpha Blending

            MATLAB, a tool for testing and developing image processing algorithms, was used to implement alpha blending of images. Alpha blending combines two images A and B according to the equation Y = aA + (1 - a)B, which is used to analyze how the result changes with the value of alpha. If the equation is split into two factors, F1 = aA and F2 = (1 - a)B, so that Y = F1 + F2, then lower values of alpha decrease F1, whereas higher values of alpha increase F1 and decrease F2. When the two factors are added, the net result is dominated by whichever factor the value of alpha favors, as shown in figure 3.
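The blending equation can be sketched per pixel in pure Python (grayscale images as nested lists; the function name is illustrative):

```python
def alpha_blend(a_img, b_img, alpha):
    """Per-pixel alpha blend: Y = alpha*A + (1 - alpha)*B, with alpha in [0, 1]."""
    return [[int(alpha * a + (1 - alpha) * b) for a, b in zip(ra, rb)]
            for ra, rb in zip(a_img, b_img)]

A = [[100, 100]]
B = [[200, 0]]
print(alpha_blend(A, B, 0.5))  # an even mix of the two images
```

With alpha = 1 the output equals A, with alpha = 0 it equals B, and intermediate values mix the two, which is exactly the dominance behavior described above.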

 

 

Figure 3

Histogram equalization

            Histogram equalization is among the operations used to increase image contrast during digital image processing. It spreads the pixel values over the full range of grayscale levels of an image. The histogram plot of a low-contrast image shows high peaks within a small range of grayscales; it is therefore useful to equalize the distribution of histogram values over the whole grayscale range, which is done by linearizing the cumulative distribution function [5]. To increase the contrast of the cameraman image indicated in figure 5 below, MATLAB code was used to equalize the image histogram; the image was processed in MATLAB using the 'histeq' function [6].

Consequently, another image, shown in figure 6 below, was processed in the same way, and the histograms of both versions were plotted. In the original, a narrow patch of grayscale values lies between pixel values 100 and 180; in the other image, almost all pixel values lie between 90 and 135. Observing the images after histogram equalization, the contrast is better and the histogram is spread evenly over the grayscale range. Such closely packed pixel values are why the original images look quite dull and why it is difficult to differentiate between the diverse regions of the image.
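The CDF-based mapping that 'histeq' performs can be sketched in pure Python as follows (a minimal version for 8-bit grayscale; MATLAB's implementation differs in detail):

```python
def equalize(img, levels=256):
    """Histogram equalization via the cumulative distribution function (CDF)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Build the cumulative distribution of gray levels.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # Scale the CDF back to the full gray-level range to get a lookup table.
    lut = [round(c / n * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]

dull = [[100, 100], [150, 200]]  # values packed into a narrow band
print(equalize(dull))            # values spread over the full range
```

Because the lookup table is the scaled CDF, frequent gray levels are pushed apart and rare ones compressed, spreading the histogram across the whole grayscale range as described above.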

Conclusion

Various fundamental steps, including image acquisition, image enhancement, and color image processing, are vital to understanding digital image processing. They enable us to follow every detail and every process an image goes through before it is completely developed. Digital image acquisition is the mapping of the three-dimensional (3D) visual world onto a 2D array of image data; an array of photosensors is typically used to acquire the image, though other techniques and sensor configurations can also be used. MATLAB, a tool for testing and developing image processing algorithms, was used to implement alpha blending of images, which is valuable because it shows how images are combined and the resulting effect. Lastly, histogram equalization is among the operations used to increase image contrast during digital image processing, and it is crucial because it spreads pixel values over the full range of grayscale levels of an image.

 

Figure 5                                                                        Figure 6

References

[1] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, 2002. (ISBN: 0-13-094650-8)

[2] T. Acharya and A.K. Ray, Image Processing: Principles and Applications. Hoboken, NJ: Wiley-Interscience, 2005. (ISBN: 0-471-71998-6)

[3] A.K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.

[4] R.C. González, R.E. Woods, and S.L. Eddins, Digital Image Processing Using MATLAB.

[5] W. Burger and M.J. Burge, Principles of Digital Image Processing, 2009.

[6] The MathWorks, Inc., Image Processing Toolbox 7.0. http://www.mathworks.com/products/image/. Accessed 31st of July.

 
