--- title: "biopixR - Introduction" author: "Tim Brauckhoff and Stefan Rödiger" date: "`r Sys.Date()`" output: html_document: toc: yes number_sections: yes pdf_document: toc: yes vignette: > %\VignetteIndexEntry{biopixR - Introduction} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} --- ```{r setup, include=FALSE} knitr::opts_chunk$set( echo = TRUE, warning = FALSE, message = FALSE, cache = FALSE, comment = NA, verbose = TRUE, fig.width = 5, fig.height = 5 ) original_options <- options() options(digits = 3) ``` # Getting started The `biopixR` package includes multiple images of microbeads as an example to demonstrate its analytical and processing abilities for biological imagery. This sample images display the package's features, enabling users to experiment with image analysis and manipulation within the contexts of biotechnology and life sciences. Researchers and practitioners can utilize this illustrations to comprehend the application of `biopixR` to their individual imaging requirements, whether pertaining to cell biology, microscopy, or any other biological imaging applications. ## First step: Import of an image The `biopixR` package features an import function called `importImage`. This function acts as a wrapper, integrating the capabilities of the `magick` and `imager` packages. Since most image processing operations rely on `imager`, the `importImage` function converts all formats into the `imager` class 'cimg'. The package supports importing digital images in various file formats, including Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), Bitmap Image File (BMP), and Tagged Information Interchange Format (TIFF). ```{r} library(biopixR) path2img <- system.file("images/beads.png", package = "biopixR") beads <- importImage(path2img) ``` ```{r beads} plot(beads) ``` ```{r class} class(beads) ``` ## Detecting objects in an image The objective of this task is to extract important information from an image consisting of microbeads. As a preliminary step, it is essential to distinguish between individual microbeads and acquire their corresponding coordinates or positions. The `objectDetection` function can perform segmentation using either thresholding or edge detection. The thresholding method is particularly suited for images with high and inhomogeneous backgrounds, as it includes background correction by solving the Screened Poisson Equation before applying the threshold. This allows for the detection of low-contrast objects with inconsistent backgrounds, such as transparent microbeads. When edge detection is chosen, a modified Canny edge detector, provided by the `edgeDetection` function, is used. This modified function reconnects line ends to nearby contours, ensuring continuous contours even with lower smoothing settings. In summary, the `objectDetection` function gathers detailed information about the microbeads, enabling the identification and differentiation of individual objects. This process helps derive precise coordinates for each object in the image, which serves as the foundation for further analysis and characterization of the microbeads within the `biopixR` package. ```{r objectDetection} res_objectDetection <- objectDetection(beads, method = 'edge', alpha = 1, sigma = 0) ``` This function generates a list of objects. 
Let's examine the specific outcomes and explore methods for visualizing them, starting with the center coordinates of the microbeads:

```{r visualization_1}
plot(beads)
with(
  res_objectDetection$centers,
  points(
    res_objectDetection$centers$mx,
    res_objectDetection$centers$my,
    col = factor(res_objectDetection$centers$value),
    pch = 19
  )
)
```

Upon closer examination, it is evident that each individual microbead is accurately identified by a single point at its center, and their distinctiveness is conveyed through varying colors, in line with our intended objective. However, the identification of clotted microbeads, referred to as doublets or multiplets, deviates from the expected pattern. Notably, not every visually distinguishable microbead is marked with a distinct color. The observed behavior, where doublets are identified as a single entity, occurs because their edges disappear along the contact surface. The same principle applies to multiplets; the continuous edges of clustered beads cause them to be treated as a single, larger object.

Let's examine the next output from `objectDetection`. This function captures the coordinates of labeled regions, providing precise details about the position of each microbead. By leveraging another function within the package, `changePixelColor`, we can selectively color specific coordinates in a 'cimg'. Thus, we can apply this function to highlight all the extracted coordinates in the microbead image and assess whether the outcome aligns with our expectations.

```{r visualization_2}
changePixelColor(
  beads,
  res_objectDetection$coordinates,
  color = factor(res_objectDetection$coordinates$value),
  visualize = TRUE
)
```

The visual depiction shows that all relevant coordinates were successfully retrieved, with each variant (single microbeads, doublets, and multiplets) colored accordingly. As previously stated, these clotted microbeads should be excluded from further consideration. The difference in size serves as a critical factor for efficient sorting and subsequent analysis. Therefore, the selected parameter for addressing these microbeads will be their size. The next section will provide a detailed explanation of the `sizeFilter` application process.

Before delving into the available filter functions of the package, let us first examine the internal visualization feature of the `objectDetection` function. The edges identified by the `edgeDetection` function are visually emphasized with color, simplifying the adjustment of the threshold parameter (`alpha`) in the `objectDetection` function. In addition, the identified centers are represented as green circles. This visualization is particularly useful in determining the smoothing factor (`sigma`). Sometimes, smoothing is necessary to improve the recognition of complete objects and prevent the marking of fragmented edges.

```{r visualization_3}
res_objectDetection$marked_objects |> plot()
```

A crucial distinction, however, is that the highlighted microbeads are obtained as a 'cimg', which opens up the possibility of building an interactive tool using `tcltk`. This facilitates the development of an interactive interface, empowering users to dynamically explore the adjustment of the individual parameters and observe the corresponding changes in the detected microbeads. The interactive interface is provided by the `interactive_objectDetection` function within the `biopixR` package.
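A minimal sketch of how the interactive session might be started is shown below. It assumes that the image is passed as the first argument; the chunk is not evaluated here, since the function opens a `tcltk` window.

```{r, eval=FALSE}
# Launch the interactive parameter exploration (opens a tcltk window);
# assumption: the image is passed as the first argument.
interactive_objectDetection(beads)
```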
As previously discussed, the 'edge' method requires `alpha` and `sigma` as input parameters, which significantly impact the final result. To simplify the process of determining these parameters and to facilitate automation and batch processing, two methods are provided for their automated calculation:

- **Grid Search** (alpha = 'static'; sigma = 'static')
- **Gaussian Processes for Pareto Front Estimation and Optimization** (alpha = 'gaussian'; sigma = 'gaussian')

Both methods rely on a fitness function that extracts shape information using another function (`shapeFeatures`). This fitness function evaluates the results obtained with different input parameters, assuming circular-shaped objects. While the grid search can be time-consuming because it tests every possible combination, the Pareto front optimization samples and analyzes only a subset of combinations, estimating the optimal parameters more quickly.

It should be noted that the threshold method can also be employed, which does not require any additional input. Although thresholding is a valid approach for segmentation, it has the disadvantage of merging objects in close proximity that the edge detector would consider distinct. The decision between greater accuracy, which requires either parameter input or time-consuming parameter estimation, and the more straightforward thresholding approach therefore depends on the user's specific requirements.

```{r}
res_threshold_object <- objectDetection(beads, method = 'threshold')
res_threshold_object$marked_objects |> plot()
```

## Filter microbeads according to size and proximity

As previously stated, it is crucial to remove doublets and multiplets before performing the analysis. This objective will be addressed in this section using the `sizeFilter`. The filter is applied to the image using the previously obtained coordinates and centers, with specified lower and upper limits. When a sufficient number of objects is detected, the limits can be calculated automatically based on the interquartile range (IQR) of the size distribution. To simplify limit selection when too few objects are detected, the function issues a warning and displays the size distribution. In that case, an interactive `readline` prompt asks whether the limits should be calculated automatically or adjusted based on the displayed size distribution plot. This interactive module can also be triggered by setting `lowerlimit` and `upperlimit` to 'interactive'.

```{r, eval=FALSE}
res_sizeFilter <- sizeFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  lowerlimit = "auto",
  upperlimit = "auto"
)
```

```{r, echo=FALSE}
res_sizeFilter <- sizeFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  lowerlimit = 0,
  upperlimit = Inf
)
plot(res_sizeFilter$centers$size, ylab = "size in px")
```

As shown by the size distribution, there are two larger objects (doublet: \>150 px; multiplet: \>400 px). Therefore, the limits will be set accordingly. In some cases, it can be difficult to achieve continuous edges around multiplets, which can lead to the detection of multiple small objects corresponding to fragments of the multiplet's edge. To address this issue, a lower limit can be set to exclude results that may be affected by this phenomenon.
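For illustration, IQR-based limits in the spirit of the automatic calculation can be derived by hand from the detected sizes. The sketch below assumes the common 1.5 × IQR rule; the package's internal 'auto' rule may differ.

```{r, eval=FALSE}
# Hypothetical IQR-based limits (1.5 * IQR rule assumed for illustration only)
sizes <- res_sizeFilter$centers$size
q <- quantile(sizes, c(0.25, 0.75))
c(lower = q[1] - 1.5 * IQR(sizes), upper = q[2] + 1.5 * IQR(sizes))
```

For the example at hand, the limits are instead set manually based on the plotted distribution: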
```{r}
res_sizeFilter <- sizeFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  lowerlimit = 0,
  upperlimit = 150
)
```

**visualization sizeFilter**:

```{r}
changePixelColor(
  beads,
  res_sizeFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)
```

The goal of excluding clotted microbeads from the analysis has been achieved successfully. As shown in the image above, the resulting data set now only includes individual microbeads.

When microbeads are in close proximity, they can induce fluorescence in each other. This phenomenon can lead to misleading signals and contribute to false positives during analysis. To prevent distorted results, the `proximityFilter` is used in subsequent steps. This function inspects each gathered center and surveys a defined radius for positive pixels. If another positive pixel from a different object is detected within this range, both objects are discarded because of their proximity. The radius can be selected manually or determined automatically. In the automatic calculation, the size of the remaining microbeads is determined first. The radius is then calculated from the object area $A$ (size in pixels) using the following formula, assuming a circular object:

\[
\text{radius} = \sqrt{\frac{A}{\pi}}
\]

With radius = 'auto', the area scanned from the center of each microbead extends to twice this radius, ensuring a minimum distance of half a microbead between neighboring objects. Note that the coordinates obtained from the `objectDetection` function should be used, as they are unfiltered and therefore include all coordinates. This ensures the accurate exclusion of microbeads that are in proximity to doublets or multiplets.

```{r}
res_proximityFilter <- proximityFilter(
  centers = res_sizeFilter$centers,
  coordinates = res_objectDetection$coordinates,
  radius = "auto"
)
```

**visualization proximityFilter**:

```{r}
changePixelColor(
  beads,
  res_proximityFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)
text(
  res_proximityFilter$centers$mx,
  res_proximityFilter$centers$my,
  res_proximityFilter$centers$value,
  col = "grey"
)
```

The chapter's objective has been accomplished, as demonstrated by the most recent plot. The `sizeFilter` successfully eliminated the doublet and multiplet from the data set. Furthermore, microbeads separated by less than half a microbead were removed with the aid of the `proximityFilter`. To demonstrate this result, the `changePixelColor` function was once again used, coloring every remaining pixel. Consequently, the microbeads that remain are highlighted in dark green, indicating their successful passage through the filtering process.

## Displaying results

To conclude this chapter, we need to extract meaningful information from the filtered data set. One of the most fundamental results to be displayed after applying a filter is undoubtedly the number of remaining and discarded objects. As the size of the objects has already been calculated in both algorithms, this information should also be included in the display. Moreover, the intensity of the signal is a crucial parameter for both microbeads and any fluorescent image. Finally, it may be of interest to calculate the area density, which represents the percentage of detected pixels (microbeads) relative to the pixel area of the entire image. To extract this information, the `resultAnalytics` function from the `biopixR` package is used.
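As a quick plausibility check, the area density can also be computed by hand from the filtered coordinates. This is a minimal sketch, not the package's own calculation; multiply by 100 to obtain a percentage.

```{r}
# Fraction of detected microbead pixels relative to all pixels in the image
# (illustrative only; resultAnalytics() reports the area density as well)
nrow(res_proximityFilter$coordinates) / (dim(beads)[1] * dim(beads)[2])
```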
The `resultAnalytics` function requires the data frame of the remaining coordinates, the unfiltered coordinates, and the original image as inputs.

```{r}
result <- resultAnalytics(
  img = beads,
  coordinates = res_proximityFilter$coordinates,
  unfiltered = res_objectDetection$coordinates
)
result$detailed
```

While it is possible to show a detailed version of the results, listing each microbead with its cluster number, size, intensity, and coordinates, this presentation can become overwhelming, especially when dealing with larger images containing numerous objects. Consequently, the image results are summarized in a single row, emphasizing the key parameters described earlier.

```{r}
result$summary
```

## Individual usage of the filter functions

The results generated by the `objectDetection` function can be quickly displayed using the `resultAnalytics` function. Therefore, let's first examine the unfiltered results available from the image.

```{r}
result_proximityFilter <- resultAnalytics(
  img = beads,
  coordinates = res_objectDetection$coordinates
)
result_proximityFilter$detailed
```

To increase versatility, the filter functions can be used individually, without depending on each other. The following section examines the results of applying each filter separately. We will begin with the `sizeFilter`. Once again, the output from the `objectDetection` function is used as input.

```{r}
ind_sizeFilter <- sizeFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  lowerlimit = 50,
  upperlimit = 150
)

changePixelColor(
  beads,
  ind_sizeFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)
text(
  ind_sizeFilter$centers$mx,
  ind_sizeFilter$centers$my,
  ind_sizeFilter$centers$value,
  col = "grey"
)
```

```{r}
result_sizeFilter <- resultAnalytics(
  img = beads,
  coordinates = ind_sizeFilter$coordinates,
  unfiltered = res_objectDetection$coordinates
)
result_sizeFilter$detailed
```

As demonstrated in the previous section, the `sizeFilter` function successfully removes multiplets and doublets. The resulting output can then be used directly in the `resultAnalytics` function to extract the most important information.

The following section will present the individual use of the `proximityFilter`. The input remains the same as before.

```{r}
ind_proximityFilter <- proximityFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  radius = "auto"
)

changePixelColor(
  beads,
  ind_proximityFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)
text(
  ind_proximityFilter$centers$mx,
  ind_proximityFilter$centers$my,
  ind_proximityFilter$centers$value,
  col = "grey"
)
```

As expected, the `proximityFilter` excluded microbeads that were close to each other. In this situation, a doublet is in proximity to a single microbead, so both the doublet and its neighboring microbead are rejected by the filter.

```{r}
result_proximityFilter <- resultAnalytics(
  img = beads,
  coordinates = ind_proximityFilter$coordinates,
  unfiltered = res_objectDetection$coordinates
)
result_proximityFilter$detailed
```

# Case study I: Microbeads in droplets - Dealing with discontinuous edges

To further illustrate the package's capabilities, the following section presents a case study focused mainly on addressing discontinuous edges in image analysis. The study showcases the integration of crucial data from two images to determine the quantities of droplets and microbeads.
Additionally, the analysis aims to investigate how often a single microbead occupies a droplet, as opposed to situations in which multiple microbeads are present in a single droplet. The study employs an algorithm that focuses on filling gaps along discontinuous edges. This is achieved through a combination of detecting line ends and interpolating pixels. Using this comprehensive method, the study provides valuable insight into the distribution of microbeads among the droplets in the provided image. These findings demonstrate the flexibility of the package in handling complex image analysis scenarios.

The following images serve as test subjects for the upcoming study. The first image displays a brightfield view showing droplets, some of which contain microbeads. The second image displays the fluorescence channel, exhibiting only the microbeads.

```{r fig.show='hold', out.width="49%"}
plot(droplets)
plot(droplet_beads)
```

## Closing gaps in droplet contours

In typical fashion for image analysis, this study starts by applying a threshold to the brightfield image. The resulting image reveals a distinct challenge: the edges of the individual partitions are not continuous. In order to differentiate individual partitions and evaluate whether they contain microbeads, it is crucial to bridge these gaps. Fortunately, the package offers a specialized function, `fillLineGaps`, to address this issue. The algorithm identifies line endpoints and connects them to the nearest neighboring edge that is not their own. Additionally, the `objectDetection` function is used to eliminate specific objects, such as the microbeads. This step is crucial to prevent the unwanted outcome of line ends becoming connected to microbeads. The code chunks below, derived from the `fillLineGaps` function, visually demonstrate the elimination of the microbeads and the detection of line ends, which are highlighted with the `changePixelColor` function.
```{r fig.show='hold', out.width="49%"}
# preprocessing: threshold, negate and mirroring
thresh <- threshold(droplets, "13%")
thresh_cimg <- as.cimg(thresh)
thresh_magick <- cimg2magick(thresh_cimg)
neg_thresh <- image_negate(thresh_magick)
neg_thresh_cimg <- magick2cimg(neg_thresh)
neg_thresh_m <- mirror(neg_thresh_cimg, axis = "x")

# first remove microbeads from droplet image to prevent reconnecting with
# labeled regions that are not lines/edges
beads_to_del <- droplet_beads
bead_coords <- objectDetection(beads_to_del, alpha = 1, sigma = 0.1)

# transform binary image to array to modify individual values
thresh_array <- as.array(neg_thresh_m)
for (i in seq_len(nrow(bead_coords$coordinates))) {
  thresh_array[
    bead_coords$coordinates[i, 1],
    bead_coords$coordinates[i, 2],
    1, 1
  ] <- 0
}
# microbeads removed from the droplet image; transform back to 'cimg'
thresh_clean_cimg <- as.cimg(thresh_array)

# displaying the problem of discontinuous edges
plot(thresh_cimg)
# displaying the image with removed microbeads
plot(thresh_clean_cimg)
```

```{r fig.align='center'}
# same orientation for 'cimg' and 'magick-image'
thresh_clean_m <- mirror(thresh_clean_cimg, axis = "x")
thresh_clean_magick <- cimg2magick(thresh_clean_m)

# getting coordinates of all line ends
mo1_lineends <- image_morphology(
  thresh_clean_magick,
  "HitAndMiss", "LineEnds"
)

# transform extracted coordinates into data frame
lineends_cimg <- magick2cimg(mo1_lineends)

end_points <- which(lineends_cimg == TRUE, arr.ind = TRUE)
end_points_df <- as.data.frame(end_points)
colnames(end_points_df) <- c("x", "y", "dim3", "dim4")

# highlighted line ends
vis_lineend <- changePixelColor(thresh_clean_cimg,
                                end_points_df,
                                color = "green",
                                visualize = TRUE
)
```

After reviewing part of the `fillLineGaps` function, let us apply it to the example images provided. The first three parameters have already been discussed: the image is converted to a binary image by thresholding, and the identified objects (here, the microbeads) are removed. The `alpha` and `sigma` parameters, denoting the threshold adjustment factor and the smoothing factor, are derived from the `cannyEdges` function of the `imager` package. The `radius` parameter determines the maximum pixel range around each line end that is scanned for another edge. The `iterations` parameter specifies the number of times the algorithm is applied to the given image. The function incorporates an internal visualization that highlights the pixels added by the algorithm to fill the line gaps. The following images display the visualization and the result of the function.

```{r fig.show='hold', out.width="49%"}
closed_gaps <- fillLineGaps(droplets,
                            droplet_beads,
                            threshold = "13%",
                            alpha = 1,
                            sigma = 0.1,
                            radius = 5,
                            iterations = 3,
                            visualize = TRUE
)
closed_gaps |> plot()
```

As intended, the contours show reduced fragmentation and improved continuity, enabling further analysis to extract meaningful information from the image. While this accomplishment is notable, it is important to acknowledge the algorithm's limitations. The algorithm performs well where relatively straight lines are fragmented, but challenges arise in more complicated situations. One issue arises from diagonal line endings: for lines with a one-pixel width, each pixel is treated as a separate cluster, so the direct neighbor would already meet the reconnection requirements. To address this problem, diagonal line endings are not reconnected with their own cluster or with the cluster of their first direct neighbor.
A different issue arises when there are multiple edges in the scan area. In these instances, the endpoint reconnects with all of them, potentially generating small new partitions and a clotted-like structure. Despite this, the algorithm successfully closes gaps in the majority of cases, rendering it satisfactory for our specific example.

### Animated visualization of the `fillLineGaps` algorithm

```{r fillLineGaps, fig.align='center'}
first_img <- vis_lineend
second_img <- closed_gaps
first_m <- mirror(first_img, axis = "x")
second_m <- mirror(second_img, axis = "x")
first_magick <- cimg2magick(first_m)
second_magick <- cimg2magick(second_m)
img <- c(first_magick, second_magick)
image_animate(image_scale(img, "500x635"), fps = 1, dispose = "previous")
```

Due to size limitations for packages on CRAN, a separate vignette will be published soon to cover the remaining new functions and further information about the analysis of microbeads in droplets. This vignette will include detailed information on:

- `imgPipe`, a pipeline for object detection and filtering,
- `scanDir`, which applies the pipeline to the analysis of an entire directory,
- `haralickCluster`, which extracts Haralick features and clusters the results using Partitioning Around Medoids (PAM),
- `shapeFeatures`, which extracts shape-related information from detected objects and groups them using Self-Organizing Maps (SOM).

```{r echo=FALSE}
options(original_options)
```

\pagebreak

```{r}
sessionInfo()
```