
Summer Research Fellowship Programme of India's Science Academies

Microscopic image processing

Purvi Lohana

Government College of Engineering, Amravati. Kathora Naka, Amravati, Maharashtra 444604

Dr. P. Rajalakshmi

Indian Institute of Technology, Hyderabad

Abstract

Medical imaging technologies have become a core component of a large number of diagnostic procedures in current clinical practice, as well as in medical and biological research. Microscopic image processing is the area of digital image processing in which numerous image processing techniques are applied to images captured by a microscope. Technical advances have made it easier to interface a microscope with an image processing system. It is increasingly used in diverse fields such as medicine, biomedical research, in-vitro fertilization, cancer research, drug testing and so forth. This report gives an overview and classification of image processing methods for microscopic images that are of great importance in the medical field. The processing performed on microscopic images includes image acquisition and techniques such as segmentation, enhancement, normalization, convolution, image restoration, image compression, edge detection, and their analysis. Each technique has its own algorithm, with its own abilities and features for the analysis of grey-level images. These methods support clinical diagnostic procedures, for example by magnifying tissues and organisms that are too minute to be seen clearly with the naked eye, and they increase the visibility of cell structures and organelles by modifying the spatial characteristics of the microscopic image. This exploration is a step towards the improvement of medical facilities in India.

Keywords: microscopic image processing, biomedical research, image acquisition, clinical diagnostic

INTRODUCTION

Variations of the basic microscope enable us to look into structures much too small to be seen with the naked eye. Microscopic image processing has therefore been increasingly used in biomedical research and clinical medicine, as part of medical image processing. Over the years, these image processing techniques have contributed a great deal to medical applications, for example image segmentation, image registration, and image-guided surgery. In earlier times the most used modality was the X-ray, which is prevalent even today. Another imaging technique is Magnetic Resonance Imaging (MRI), which works on magnetic characteristics and provides information about the internal organs of the body. Microscopic imaging can be used to extract quantitative and qualitative information about a specimen from a microscope image. It transforms an image so that the displayed image is much more informative than the original one; the main aim is to increase the visibility of the object or region of interest in the image. Image processing tools provide a thorough set of reference material, i.e. standard algorithms, functions and software for image processing, analysis, and visualization. In this paper, we describe several image processing methods applied to microscopic images which have been used in medical image analysis.

The typical image processing framework comprises three stages. First, the image is captured through the microscope; this requires a good-quality microscope camera as hardware. Second, various image processing techniques are applied to the dataset. Finally, the processed image data is used for analysis. This paper focuses on the second stage, the image processing techniques applied to microscopic images. Analysis of microscopic images includes the following techniques: normalization, image segmentation (global thresholding, adaptive thresholding), resizing, image filtering, edge detection, image negatives, and image enhancement (contrast stretching).

Fig. 1. A General Image Processing Framework

This paper also includes the prototype for the embedded part of an Internet of Things (IoT) system, in which a NodeMCU and an Arduino are interfaced with a sensor. An external camera is used to capture ordinary (non-microscopic) images on which the processing can be performed and studied. Combined, these components could send processed microscopic image data over an IoT cloud, a useful practical application in the medical sector. The explosive growth of IoT is changing our world, and its numerous applications can be of great importance in medical and biological research.

    DEALING WITH EMBEDDED PART

    Interfacing of Arduino with the NodeMCU involves three different types of communication.

I2C (Inter-Integrated Circuit) communication – Multiple masters can control multiple slaves. SCL (serial clock) and SDA (serial data) are the physical lines over which the data is transferred. It is a serial communication protocol in which data is transferred bit by bit along a single data wire, and it is synchronous.

UART communication – UART stands for Universal Asynchronous Receiver/Transmitter. It transmits and receives serial data: data flows from the Tx pin of the transmitting UART to the Rx pin of the receiving UART. It is an asynchronous communication protocol, and data is transmitted and received in the form of data packets.

SPI (Serial Peripheral Interface) communication – Any number of bits can be sent or received in a continuous stream without interruption. Devices communicate in a master-slave arrangement, in which one master can control more than one slave. It is a synchronous communication protocol.

The UART communication protocol is used in this project.

    The list of components used is as follows:

    1. Arduino board as controller

    2. NodeMCU (ESP8266) as controller with Wi-Fi module

    3. LM35 (the temperature sensor)

    4. Jumper wires

    5. Logitech C170 Webcam

UART communication (simply known as serial communication) between the Arduino and the NodeMCU (ESP8266) was done by connecting the Tx and Rx ports on the controller boards. Here the Arduino collects sensor data and transmits it to the NodeMCU. The connections are made according to the circuit diagram given below, and the ground potential must be common to all boards.

The LM35 is an integrated analog temperature sensor whose output voltage is linearly proportional to the temperature in degrees Celsius. The sensor does not require any external calibration to provide its typical accuracy, and it is rated for the full −55 °C to 150 °C range. It has three pins: ground, Vcc (2.7 V to 5.5 V), and analog voltage output. The analog voltage output is connected to pin A0 on the Arduino.

Fig. 2. Circuit Diagram

The set-up interfacing the LM35 sensor with the Arduino, in communication with the NodeMCU, is shown in the image below.

Fig. 3. Set-up in the Lab

The source code written in the Arduino IDE gives the required output.

Fig. 4. Output on serial monitors of Arduino and NodeMCU

          Image Acquisition

The prototype did not include any microscopic images; instead, images were captured by an external Logitech camera connected to the computer. The processing was first performed on these images and then applied to microscopic images obtained from other sources.

The camera captured the images, which were then processed to give the output shown below.
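As an illustration of the acquisition step, the following is a minimal Python/OpenCV sketch for grabbing a single frame from a webcam and converting it to greyscale. The camera index and output file name are assumptions, and this is not the exact code used in the prototype.

```python
import cv2

# Open the default camera (device index 0 is an assumption; adjust if needed)
cap = cv2.VideoCapture(0)
ret, frame = cap.read()   # grab a single BGR frame
cap.release()

if ret:
    # Convert to greyscale so the processing techniques below can be applied
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite("webcam_capture.png", gray)
```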

Fig. 5. Image processing on a normal image captured by the external camera

            IMAGE TECHNIQUES WITH RESULTS

            Edge Detection

Points at which the brightness of the image changes sharply are organized into sets of straight or curved line segments termed edges. Edges are important features for analysing images, and edge detection is considered the first step in recovering information from them. An edge in an image is a significant local change in image intensity, generally associated with a discontinuity in either the image intensity or its first derivative. Two types of discontinuity are involved: 1) step discontinuities and 2) line discontinuities.

The steps involved in edge detection are filtering, enhancement, and detection.

Canny edge detection is a technique for extracting useful structural information from images while significantly reducing the amount of data to be processed.

            Canny edge detection algorithm.

            The process can be divided into 5 steps:

1) Compute the horizontal (Gx) and vertical (Gy) gradients of every pixel in the image.

2) Calculate the gradient magnitude and direction from this information.

3) Apply non-maximum suppression to get rid of spurious responses to edge detection.

4) Apply double thresholding with high and low thresholds to classify strong and weak edge pixels.

5) Finalize the detected edges by hysteresis tracking, keeping weak edges only if they are connected to strong ones.

In edge detection, the input is an image and the output is an edge map.
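As a hedged illustration, the steps above can be reproduced with OpenCV's built-in Canny detector. The input file name, smoothing parameters and threshold values below are assumptions, not values taken from this project.

```python
import cv2

# Read the microscopic image as greyscale (file name is illustrative)
img = cv2.imread("mitosis.png", cv2.IMREAD_GRAYSCALE)

# Light Gaussian smoothing to reduce noise before gradient computation
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# cv2.Canny internally computes the gradients, applies non-maximum suppression,
# double thresholding and edge tracking by hysteresis (steps 1 to 5 above)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("mitosis_edges.png", edges)
```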

            Results:

Fig. 6. Processed image of Mitosis under Microscope

              Image Enhancement

Image enhancement fundamentally improves the perception of the information in an image for human viewers and provides better input for other automated image processing techniques. Enhancement techniques alter the image brightness, contrast or the distribution of grey levels. As an outcome, the pixel intensities of the resulting image are altered according to the transformation applied to the input image. Enhancement by point processing includes image negatives and contrast stretching.

Unfortunately, there is no general theory for determining what is 'good' image enhancement with regard to human observation: if it looks good, it is good. However, when image enhancement techniques are used as pre-processing tools for other microscopic image processing techniques, quantitative measures can determine which techniques are most appropriate for extracting the most information.

The transformation function is given below:

s = T(r)

where r denotes the pixel values of the input image, s the pixel values of the output image, and T is a transformation function that maps each value of r to a value of s.

              Contrast stretching

Contrast stretching is regularly used to enhance low-contrast images that may result from poor illumination, a lack of dynamic range in the imaging sensor, a wrong lens aperture setting during image acquisition, and so on. In simple words, stretching the dynamic range of intensities means that the intensity range should be well spread over the entire available range. It is a significant preprocessing step for the analysis of microscopy images. The principal aim of image enhancement techniques is to increase the visibility of cell structures and organelles by modifying the spatial attributes of an image.

              The algorithm:

1. Read the microscopic image.

2. Extract the green channel (the green channel typically carries much of the information).

3. For contrast stretching, apply the log transform.

4. Apply global thresholding.

Logarithmic transformation can be used to increase the intensities of an image (like the gamma transformation with gamma < 1). More frequently, it is used to increase the detail (or contrast) of lower intensity values.

              The general form of the log transformation is

              s = c * log (1 + r)

              where s and r are the pixel values of the output and the input image and c is a constant.
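A minimal Python/OpenCV sketch of the algorithm above is given below; the file names, the choice of c, and the global threshold value are illustrative assumptions.

```python
import cv2
import numpy as np

# 1. Read the microscopic image (OpenCV loads it in BGR order)
img = cv2.imread("ovum.png")

# 2. Extract the green channel
green = img[:, :, 1].astype(np.float64)

# 3. Contrast stretching with the log transform: s = c * log(1 + r)
c = 255.0 / np.log(1.0 + green.max())
stretched = (c * np.log(1.0 + green)).astype(np.uint8)

# 4. Global thresholding on the stretched image (threshold 127 is illustrative)
_, binary = cv2.threshold(stretched, 127, 255, cv2.THRESH_BINARY)
cv2.imwrite("ovum_log_threshold.png", binary)
```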

              Results:

Fig. 7. Light micrograph of human ovum (egg) and sperm (processed)

                Image negatives

A normal image is called a positive. A negative image is its complete inverse, i.e. the dark areas appear light and the light areas appear dark. Negatives of microscopic images are helpful in various applications, for example displaying medical images and photographing a screen with monochrome positive film, using the resulting negatives as normal slides.

The negative transformation is a linear transformation that inverts the identity transformation: each value of the input image is subtracted from L − 1 and mapped onto the output image.

An image negative is produced by subtracting each pixel from the maximum intensity value; e.g. for an 8-bit image, the maximum intensity value is 2⁸ − 1 = 255, so each pixel is subtracted from 255 to produce the output image.

                Thus, the transformation function used in image negative is

                s = T(r) = L – 1 – r
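For an 8-bit image (L = 256) the negative transformation reduces to subtracting every pixel from 255; a minimal NumPy/OpenCV sketch follows (the file names are assumptions).

```python
import cv2

# Read the image as 8-bit greyscale (file name is illustrative)
img = cv2.imread("ovum.png", cv2.IMREAD_GRAYSCALE)

# s = L - 1 - r with L = 256, i.e. subtract each pixel from 255
negative = 255 - img

cv2.imwrite("ovum_negative.png", negative)
```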

Fig. 8. Light micrograph of human ovum (egg) and sperm (processed)

                  Image Segmentation

Segmentation partitions an image into distinct regions, each containing pixels with similar attributes. To be meaningful and valuable for microscopic image analysis and interpretation, the regions should correspond to depicted objects or features of interest. The objective of segmentation is to simplify or change the representation of an image into something that is more meaningful and easier to analyze. Pixels in a region are similar according to some homogeneity criterion, such as colour, intensity or texture, which is used to locate and distinguish objects and boundaries in an image. Segmentation has significant applications in medical imaging, object detection, computer-aided surgery, image deblurring, microscopy, etc. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image, as in edge detection.

The basic segmentation technique is described below.

                  Thresholding

Thresholding is the simplest and most widely used segmentation technique. It helps separate the foreground from the background, converting a multilevel (grey) image into a binary one. Its advantage is that it reduces the complexity of the data and simplifies recognition and classification. Thresholds are either global or local, i.e. either constant throughout the image or spatially varying.

                  A thresholded image can be defined as

$$g(x,y)=\begin{cases}1, & \text{if } f(x,y) > T\\ 0, & \text{if } f(x,y) \leq T\end{cases}$$

                  where 1 is object and 0 is background

Two methods are described here, with results when applied to microscopic images.

·       Global thresholding: T is a constant applied over the entire image.

·       Adaptive thresholding: T depends on the spatial coordinates (x, y).

                  Global thresholding

This method assumes that the image has a bimodal histogram, so the object can be separated from its background by the simple technique of comparing image values with a threshold value. Given a threshold value, pixels below it are assigned the value 0 (or 255, according to requirements). The threshold is global in the sense that it is applied to all pixels.

Global thresholding is computationally quick and easy. It works better on images that contain objects with uniform intensity values on a contrasting background. However, it may fail if there is low contrast between the object and the background, if the image is noisy, or if the background intensity varies significantly across the image.

                  Algorithm

1. Select an initial value for T.

2. Segment the image using the value T into two groups, i.e. group A for values greater than T and group B for values less than or equal to T.

3. Calculate the average intensity values m and n for groups A and B respectively.

4. Compute a new threshold value T = (m + n)/2.

5. Repeat steps 2 to 4 until the difference in T between successive iterations is smaller than a predefined ∆T.
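A minimal NumPy sketch of this iterative global-thresholding algorithm follows; the stopping tolerance ∆T and the file name are illustrative assumptions.

```python
import cv2
import numpy as np

def iterative_global_threshold(img, delta_t=0.5):
    """Estimate a single global threshold T by the iterative mean-of-groups rule."""
    t = float(img.mean())                     # 1. initial estimate of T
    while True:
        group_a = img[img > t]                # 2. pixels with value > T
        group_b = img[img <= t]               #    pixels with value <= T
        m = group_a.mean() if group_a.size else 0.0   # 3. mean intensity of A
        n = group_b.mean() if group_b.size else 0.0   #    mean intensity of B
        t_new = (m + n) / 2.0                 # 4. updated threshold
        if abs(t_new - t) < delta_t:          # 5. stop once T has converged
            return t_new
        t = t_new

img = cv2.imread("mitosis.png", cv2.IMREAD_GRAYSCALE)
T = iterative_global_threshold(img)
binary = np.where(img > T, 255, 0).astype(np.uint8)   # 1 = object, 0 = background
```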

Fig. 9. Processed image of Mitosis under Microscope

                    Adaptive thresholding

Adaptive thresholding commonly takes a grayscale or colour image as input and, in the simplest implementation, yields a binary image representing the segmentation. In adaptive thresholding, a threshold value is calculated for each pixel in the image. If the value of a pixel is below its threshold, it is set to the background value; if it is above the threshold, it takes the foreground value.

One of the approaches to finding the local threshold is to statistically examine the intensities in the local neighbourhood of every pixel. The results of this technique depend largely on the input image. Simple and fast choices include the mean of the local intensity distribution or the average of the local minimum and maximum values:

$$T = \frac{\min + \max}{2}$$

For a good threshold to be chosen, the size of the neighbourhood has to be large enough to cover sufficient foreground and background pixels. On the other hand, choosing regions that are too large can violate the assumption of approximately uniform illumination.
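As an illustration, OpenCV provides a mean-based adaptive threshold; the sketch below uses the local mean (rather than the (min + max)/2 statistic above), and the neighbourhood size, offset constant and file names are illustrative assumptions.

```python
import cv2

img = cv2.imread("mitosis.png", cv2.IMREAD_GRAYSCALE)

# Threshold each pixel against the mean of its 35x35 neighbourhood minus a constant
binary = cv2.adaptiveThreshold(img, 255,
                               cv2.ADAPTIVE_THRESH_MEAN_C,  # local mean as the statistic
                               cv2.THRESH_BINARY,
                               35,   # neighbourhood (block) size, must be odd
                               5)    # constant subtracted from the local mean

cv2.imwrite("mitosis_adaptive.png", binary)
```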

Fig. 10. Processed image of Mitosis under Microscope

                      Image Normalization

Normalization of a microscopic image can be defined in two ways. The first is to 'cut' values that are extremely high or extremely low: if the image has negative values they are set to zero, and values above the maximum are set to the maximum value. The second is to linearly stretch all the values so that they fit into the interval from zero to the maximum value.

                      In general, normalization changes the range of pixel intensity values.

Normalization transforms an n-dimensional image I with intensity values in the range {Min, …, Max} into a new image with values in the range {newMin, …, newMax}.

                      The formula according to which the image processing is performed is

$$I_N = (I - Min)\,\frac{newMax - newMin}{Max - Min} + newMin$$
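A minimal NumPy sketch of this min-max normalization, assuming an 8-bit target range of newMin = 0 and newMax = 255:

```python
import numpy as np

def normalize(image, new_min=0.0, new_max=255.0):
    """Linearly stretch intensities from [Min, Max] to [new_min, new_max]."""
    old_min, old_max = float(image.min()), float(image.max())
    if old_max == old_min:               # avoid division by zero on flat images
        return np.full_like(image, new_min, dtype=np.uint8)
    stretched = (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min
    return stretched.astype(np.uint8)
```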

Fig. 11. Light micrograph of human ovum (egg) and sperm (processed)

                        Image Filtering

Image filtering is used for modifying or enhancing an image, which includes highlighting certain features, removing other features, smoothing, and sharpening. It aims at eliminating undesirable characteristics. Image filtering can be done in two domains: 1) the spatial domain and 2) the frequency domain.

                        Sharpening

Sharpening can be done using a sharpening kernel. Here it is performed on a microscopic image using 2D filtering, following the procedure below.

                        Weighted averaging = 2D linear convolution

$$g(m,n) = \sum_{l=l_0}^{l_1}\sum_{k=k_0}^{k_1} h(k,l)\, s(m-k,\, n-l)$$

In the 2D frequency domain: $$G(f_1, f_2) = H(f_1, f_2)\, S(f_1, f_2)$$

Frequency response of the 2D filter:

$$H(f_1, f_2) = \sum_{m=l_0}^{l_1}\sum_{n=k_0}^{k_1} h(m,n)\, e^{-j2\pi(f_1 m + f_2 n)}$$

H(f_1, f_2) is periodic, so only one period (a square region of the frequency plane) needs to be examined.

The sharpening kernel used here is

$$\begin{bmatrix}-1 & -1 & -1\\ -1 & 9 & -1\\ -1 & -1 & -1\end{bmatrix}$$
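A minimal OpenCV sketch applying this 3×3 sharpening kernel by 2D filtering with cv2.filter2D; the file names are illustrative assumptions (the kernel is symmetric, so correlation and convolution give the same result).

```python
import cv2
import numpy as np

# Read the microscopic image as greyscale (file name is illustrative)
img = cv2.imread("mitochondrion.png", cv2.IMREAD_GRAYSCALE)

# 3x3 sharpening kernel: boosts the centre pixel and subtracts its 8 neighbours
kernel = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]], dtype=np.float32)

# 2D linear filtering; ddepth = -1 keeps the 8-bit depth of the input
sharpened = cv2.filter2D(img, -1, kernel)

cv2.imwrite("mitochondrion_sharp.png", sharpened)
```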

Fig. 12. Mitochondrion

                          DISCUSSION

There are various other segmentation, enhancement and filtering techniques for 2D as well as 3D images, and these can also be applied to microscopic images. These days, even machine learning has attracted interest in medical image computing and processing, and many of these medical images are microscopic images. In particular, the current level of technical advancement can effectively and efficiently handle the complexity and diversity of microscopic images. This work on image analysis aimed at extracting useful information from microscopic images in the best possible way. Some researchers report that colour segmentation is more accurate because there is more information at the pixel level than in greyscale images. Typical microscopic image analysis includes determining where the edges of an object are, counting similar objects, and calculating the area, perimeter length and other useful measurements of each object.

                          CONCLUSION

Image processing performed on microscopic images proved to be an appropriate and necessary method. It helped in extracting meaningful information, gave accurate analysis, highlighted meaningful aspects of an image and improved aesthetic appearance, without interfering with the results. The main aim, producing reproducible results without altering the inherent information in the image, was achieved. These image processing techniques allow certain practical limitations of the microscope (due to flaws in its optical system) to be largely overcome. The detailed structure of membranes and organelles in cells can be examined with the 2D imaging techniques described above.

                          ACKNOWLEDGEMENTS

                          First and foremost I express my deep sense of gratitude to the Indian Academy of Sciences (IASC-INSA-NASI) for allowing me to carry out this project in the Summer Research Fellowship Programme-2019.

                          I owe my heartiest gratitude to respected Dr. P. Rajalakshmi, Indian Institute of Technology, Hyderabad for the continuous support, motivation, and immense knowledge. I could not have imagined having a better advisor and mentor for the project.

I thank the Indian Institute of Technology, Hyderabad, for making it possible to work at the WiNet Lab. I thank everyone who was part of this lab and helped me overcome the difficulties during the project.

                          It is my privilege to express my regards to Dr. Dinesh V. Rojatkar for providing the letter of recommendation to the Indian Academy of Sciences. I also thank my institute Government College of Engineering, Amravati for allowing me to pursue this fellowship.

                          I am glad I used Authorcafe for writing my report and I would recommend this to my peers, provided some minor bugs are fixed.

                          I express my deepest sense of gratitude towards my beloved parents, fellow interns, and friends for their constant support and encouragement.

                          REFERENCES

1. “Microscopic image processing” available at: en.wikipedia.org

2. N.M. Chaudhari & B.V. Pawar (2015) Microscope Image Processing: An Overview, International Journal of Computer Applications (0975–8887), Volume 124, No. 12.

                          3.     Alireza Norouzi, Mohd Shafry Mohd Rahim, Ayman Altameem, Tanzila Saba, Abdolvahab Ehsani Rad, Amjad Rehman, et al (2014) Medical Image Segmentation Methods, Algorithms, and Applications IETE Technical Review 31(3):199-213

                          4.     Dogan, H., Ekinci(2014) M.: Automatic panorama with auto-focusing based on image fusion for microscopic imaging system. SIViP, Special Issue on Microscopic Image Processing. 8(S1).

                          5.     Sasikumar Gurumurthy & B.K.Tripathy(2011) Study of Microscopic Image processing , National Conference “Microwave Antenna and Signal Processing” [NCMASP-2011], At India, Volume: 1

                          6.     Cristian Smochina, Paul Herghelegiu and Vasile Manta (2011) Image processing techniques used in microscopic image segmentation.

                          7.     T. Pavlidis(1982), Algorithms for Graphics and Image Processing, Computer Science Press, Rockville.

                          8.     J. Parker(1991) Gray level thresholding in badly illuminated images, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 813-819.

9. International J. of Healthcare and Biomedical Research, Volume 2, Issue 3, pp. 8-11.

                          10. Serdar Cakir, Deniz Cansen Kahraman, Rengul Cetin Atalay, A. Enis Cetin (2018), Contrast Enhancement of Microscopy Images Using Image Phase Formation, IEEE Access Volume 6 : 3839- 3850.

11. E. Santos Filho, J.A. Noble, D. Wells (2010) A Review on Automatic Analysis of Human Embryo Microscope Images, Open Biomed Eng Journal 2010; 4: 170–177.

12. Md. Atiqur Rahman Ahad, Syoji Kobashi, Joao Manuel R. S. Tavares (2018) Advancements of Image Processing and Vision in Healthcare, Journal of Healthcare Engineering.

13. Nilanjan Dey, Amira S. Ashour, Ahmed S. Ashour, Aarti Singh (2015) Digital Analysis of Microscopic Images in Medicine, Journal of Advanced Microscopy Research 10(1): 1-13.

                          14. Xinyu Huang, Chen Li, Minmin Shen, Kimiaki Shirahama, Johanna Nyffeler, Marcel Leist , et al(2016) Stem cell microscopic image segmentation using supervised normalized cuts, IEEE International Conference on Image Processing (ICIP)​

                          15. Prof. D. Cavouras, Medical Image Processing.

                          ​ http://www.bme.teiath.gr/medisp/downloads/education/en_NOTES_IMAGE_PROCESSING_CAVOURAS.pdf

                          16. https://ai.stanford.edu/~syyeung/cvweb/tutorial1.html​

                          17. http://www.eie.polyu.edu.hk/~enyhchan/imagee.pdf​

                          18.  https://www.slideshare.net/tawosetimothy/image-segmentation-34430371

                          19. https://ai.stanford.edu/~syyeung/cvweb/tutorial1.html​

20. Machine Vision Book.

                          SOURCES

                          Source of original input images
IMAGE | DESCRIPTION | SOURCE
Fig. 6 | Mitosis under Microscope | https://www.shutterstock.com/search/embryo
Fig. 7 | Light micrograph of human ovum (egg) and sperm | http://cellimagelibrary.org/home
Fig. 8 | Light micrograph of human ovum (egg) and sperm | http://cellimagelibrary.org/home
Fig. 9 | Mitosis under Microscope | https://www.shutterstock.com/search/embryo
Fig. 10 | Mitosis under Microscope | https://www.shutterstock.com/search/embryo
Fig. 11 | Light micrograph of human ovum (egg) and sperm | http://cellimagelibrary.org/home
Fig. 12 | Mitochondrion | http://cellimagelibrary.org/home
