Three-dimensional Imaging and Simulation in Breast Augmentation




This article discusses the perception of three-dimensional objects and binocular vision. High-resolution three-dimensional images of the breast can be captured using a camera system consisting of 3 separate stereoscopic pairs of digital cameras. The images (surfaces) are then joined to form a 220° surface of the torso, including the breasts, and can be rotated freely in space. Simulation of augmentation with or without mastopexy is presented. Three-dimensional imaging and computer simulation of breast augmentation have become an emerging technology in many breast augmentation practices. This technology can be integrated in different ways into the consultation and informed consent process.


Key points








  • Computerized capture of three-dimensional objects with the ability to create three-dimensional representations (surfaces) is now possible.



  • These surfaces can be rotated freely in space.



  • This technology has been applied to the capture of the female breast.



  • Calculations of volume and measurement parameters can be obtained with accuracy.



  • Simulations of breast augmentation are possible by treating the breast model as an elastic solid and inserting an implant of known dimensions and volume under the breast.



  • Software algorithms deform the breast around the implant while keeping breast tissue volume constant, thereby producing a simulation of the augmented breast.



  • This technology has been applied to the consultation process of the breast augmentation procedure, improving patient understanding and facilitating communication between patient and surgeon.






Introduction


Because the focus of this article is on the current state of the art in three-dimensional (3D) computer simulations in breast augmentation, the historical aspects of the development of this technology are not addressed here. A basic explanation of the method by which 3D images are obtained and processed is presented, followed by a discussion of how this technology is used in the authors’ clinical practices. This technology is continually evolving, with major advances not only in the quality and accuracy of image capture but also in the postproduction processing and simulation of potential surgical results.


Classic examples of two-dimensional images are drawings and photographs. Depth perception is the ability to perceive the environment in 3 dimensions, as well as to naturally understand the relative distance of objects (ie, what is in the foreground vs what is in the background). Two-dimensional images by definition lack true depth and require the intervention of the brain to provide depth by means of a database of monocular visual cues that are created through visual learning over time. Visual cues can be either binocular (based on slightly differing images presented to each eye) or monocular (based on cues for which only 1 eye is needed, such as perspective, object size, texture, and grain). Perceptions of depth and 3 dimensions are based on both monocular and binocular cues. In contrast, stereoscopy is the creation or enhancement of the illusion of depth in an otherwise flat, two-dimensional image by creating binocular vision, and is the basis of the technology behind 3D imaging in breast augmentation.


Stereopsis is the perception of 3D structure and depth based on visual information presented to the brain from 2 eyes (binocular vision). Humans, like most other creatures, have 2 eyes at the same vertical level on the head but at different positions horizontally. When the eyes are focused on a subject, the images projected onto the 2 retinas differ slightly. This horizontal (binocular) disparity is caused by the horizontal separation of the 2 eyes, also known as parallax. This information is transmitted to the visual cortex, where the resultant perception is of 3D structure with depth.


A stereoscopic camera has 2 lenses and 2 sensors or film frames positioned at the same vertical level but separated, usually horizontally. It produces 2 slightly dissimilar images that, when viewed with a stereoscope so that each eye sees only 1 of the 2 images, the brain receives and converts into the perception of a 3D structure.
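The geometry underlying this disparity can be illustrated with the standard pinhole stereo relation: for 2 parallel cameras with focal length f and horizontal baseline b, a feature whose images are separated by disparity d lies at depth Z = f · b / d. The following sketch uses invented focal length, baseline, and disparity values, not those of any particular camera system.

```python
# Depth from horizontal disparity for an idealized, rectified stereo rig.
# All numeric values here are illustrative, not from a real camera system.

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Return depth in mm via the pinhole stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_mm / disparity_px

# A feature that lands 40 px apart on the 2 sensors, with a 1200 px focal
# length and a 65 mm baseline, lies about 2 m from the cameras:
z = depth_from_disparity(1200.0, 65.0, 40.0)
print(round(z, 1))  # 1950.0 (mm)
```

Note that depth varies inversely with disparity: nearby points produce large disparities, distant points small ones, which is why accuracy falls off with distance.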


Consider a photograph that has been scanned into a computer and is viewed on a flat screen. Both the print and the on-screen image are two-dimensional; any perception of three dimensionality is caused by monocular cues seen in either version. If instead a photograph of a subject is taken with a stereoscopic digital camera, the result is 2 slightly different image files and a computer to process them. It now becomes possible to create stereopsis: the perception of 3D realism based on binocular vision as obtained from 2 different digital cameras. This ability is the basis of current, state-of-the-art 3D imaging.








Computerized stereoscopic imaging


When processing binocular images, the brain is trying to reconcile identical visual content shown from 2 slightly different perspectives. A computer does this as well, but in a slightly different way. The surface of an image sensor is divided into a matrix of tiny units called pixels, which are analogous to the rods and cones of the retina: they record color and intensity. In order to reconcile the slightly dissimilar images on 2 sensors, the pixels that are recording light from exactly the same tiny portion of the subject need to be matched. To do this, the computer needs an algorithm: select a pixel from camera A and find the corresponding pixel on camera B that is recording the same part of the subject. This selection can be accomplished by looking at groups of pixels and concentrating on defining features of a specific area on the subject. Skin has few strong defining features, but it does contain texture, including pores, fine lines, and pigmentation differences, which provides a pattern to match one camera's image against the other's. An area on one camera's sensor is selected, and the computer finds the corresponding area on the other camera's sensor by pattern recognition ( Fig. 1 ).




Fig. 1


Two slightly differing images of a woman’s lips, with defining features (rhytides) within the lips that make it possible to reconcile the 2 images by pattern recognition. When this process is complete, each pixel on one camera sensor can be made to correspond with another pixel on the other camera’s sensor.
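The matching step described above can be sketched in code as simple block matching: a small patch of intensities from one camera's scan line is slid along the other camera's scan line, and the offset with the smallest sum of squared differences wins. The intensity values below are invented for illustration; production stereo systems use far more robust matching than this.

```python
# Minimal sketch of pattern matching between 2 camera scan lines.
# Intensity values are invented; real systems use more robust matching.

def ssd(p, q):
    """Sum of squared differences between 2 equal-length intensity patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def match_patch(row_a, row_b, start, size):
    """Find the offset in row_b where row_a[start:start+size] matches best."""
    patch = row_a[start:start + size]
    scores = [ssd(patch, row_b[i:i + size]) for i in range(len(row_b) - size + 1)]
    return min(range(len(scores)), key=scores.__getitem__)

# A textured strip of "skin" intensities, shifted 3 pixels in the second image:
row_a = [10, 12, 50, 48, 11, 90, 30, 31, 29, 12]
row_b = [0, 0, 0] + row_a[:-3]          # simulated horizontal disparity of 3 px
best = match_patch(row_a, row_b, start=2, size=4)
print(best - 2)  # recovered disparity: 3
```

This is why skin texture matters: on a featureless surface many offsets score equally well and the match becomes ambiguous.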




Camera calibration


The only way to determine where in space a subject lies is to have a method of calibration such that the locations of the 2 cameras relative to each other in space are known. One commonly used method is the Tsai algorithm, originally described in 1987. A white card with a series of dots a known distance apart is used as a calibration standard. An image of this card is taken, and the cameras can then be calibrated ( Fig. 2 ).




Fig. 2


Camera calibration card.


Following the Tsai calibration, which calibrates each camera separately, there is a global optimization to refine the accuracy. Photogrammetrists call this bundle adjustment: see Ref. for an example. Once the cameras are calibrated, when an image is taken of a subject, any corresponding point on the subject can be taken and, based on where that point appears on each image sensor, the location of that point in space can be determined. This process is repeated until the entire image is processed and all pixels on the 2 sensors that share a corresponding point on the subject are accounted for. For any image, each pixel can now be associated with an individual ray in space. Where the 2 rays for a given pair of corresponding pixels intersect in space is a point on the surface of the subject ( Fig. 3 ).




Fig. 3


Two representative points on a woman’s lips are shown, along with where they appear on each of the 2 image sensors. Because the relative positions of the cameras are known, the positions of the 2 points on the upper lip in space are determined.


Each image sensor captures a portion of the subject not visualized on the other sensor. These areas that lie in the periphery of the stereoscopic view of the camera are discarded.
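The ray-intersection step can be sketched as follows: each matched pixel pair defines one ray per calibrated camera, and the surface point is taken where the rays meet, or at the midpoint of their closest approach when measurement noise leaves them slightly skew. The camera positions and ray directions below are made-up illustrative values.

```python
# Sketch of triangulation: intersect the 2 rays cast through a matched pixel
# pair. Camera positions and directions are invented for illustration.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def add_scaled(p, t, d): return tuple(x + t * y for x, y in zip(p, d))

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach of rays o1 + t*d1 and o2 + s*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                 # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add_scaled(o1, t, d1)            # closest point on ray 1
    p2 = add_scaled(o2, s, d2)            # closest point on ray 2
    return tuple((x + y) / 2 for x, y in zip(p1, p2))

# Two cameras 100 mm apart, both sighting the same surface point:
left, right = (-50.0, 0.0, 0.0), (50.0, 0.0, 0.0)
point = triangulate(left, (50.0, 0.0, 500.0), right, (-50.0, 0.0, 500.0))
print(point)  # (0.0, 0.0, 500.0): the point lies 500 mm in front of the rig
```

Repeating this for every matched pixel pair yields the cloud of surface points from which the 3D surface is built.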




The three-dimensional surface


When all the patches of the subject are assembled, a 3D surface is created. This surface can be rotated in space or manipulated as desired, and is the beginning of a 3D representation of a subject. There are limits to how much of the subject's surface can be captured in this manner: a single stereoscopic camera pair cannot capture enough usable surface to permit computerized simulation. To rectify this problem, multiple stereoscopic cameras can be set up and all calibrated together. For instance, if 3 pairs of stereoscopic cameras are set up to the right, left, and front of the subject, 3 separate, overlapping surfaces can be created. Because all 3 pairs of cameras are calibrated together at the same time, every point in space on each of the 3 surfaces is known. Viewed together, the surfaces overlap to produce a seamless, larger 3D surface encompassing approximately 220° around the subject. For any given patch of the surface, only 1 pair of cameras is used: the pair that records that patch with the greatest number of pixels. This process of joining the surfaces together is called stitching ( Fig. 4 ).




Fig. 4


Three separate surfaces are stitched together to produce a 220° surface.
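The per-patch assignment rule used in stitching can be sketched as a simple selection: each surface patch is taken from whichever stereo pair records it with the most pixels. The patch names and pixel counts below are invented for illustration.

```python
# Toy sketch of the stitching assignment rule: each overlapping surface patch
# is kept from the stereo pair that records it with the most pixels.
# Patch names and pixel counts are invented for illustration.

patches = {
    "sternum":      {"left": 120, "front": 900, "right": 110},
    "left breast":  {"left": 700, "front": 650, "right": 40},
    "right flank":  {"left": 10,  "front": 300, "right": 820},
}

# For each patch, pick the camera pair with the highest pixel count.
assignment = {patch: max(counts, key=counts.get) for patch, counts in patches.items()}
print(assignment)
# {'sternum': 'front', 'left breast': 'left', 'right flank': 'right'}
```

Because every pair is calibrated into the same coordinate system, the chosen patches already line up in space and no further registration is needed to merge them.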


Photogrammetry is the process of obtaining measurements from photographs and determining the exact position of a subject’s surface points. Photogrammetry is complex and beyond the scope of this article, but it involves computational geometry, algebra, trigonometry, numerical analysis, calculus, partial differential equations, and statistics!




Creating a usable three-dimensional surface


At this point, a 3D surface of a woman’s anterior chest wall and breasts can be created. A wireframe model is created by tiling the surface with very small triangles, and color is then projected onto the model to represent the skin.




Computer simulation


In order to perform simulations, the computer algorithms have to be able to interpret the contours of a 3D surface. This interpretation is accomplished through the process of landmarking. Earlier versions of the software required manual placement of landmarks; the process is now automated and fairly accurate, although the ability to manually adjust the landmarks improves the precision of the simulation. The surface may also be cropped as desired. Cropping does not affect the simulations; it only serves to clean up the images from an aesthetic point of view.


The next step is to consider each breast in isolation from the chest wall. A finite element model is used to treat the isolated breast as an elastic solid. Much as the surface of the subject is tiled with adjoining triangles, the interior of the breast is filled with adjoining tetrahedrons. A tetrahedron is a polyhedron composed of 4 triangular faces, or sides, and 4 vertices, or corners, where 3 faces meet. Knowing the number and volumes of the tetrahedrons permits calculation of the breast volume. The accuracy of these calculations depends on accurate placement of the landmarks that are used to determine the borders of the breast with respect to the chest wall. After tetrahedralization is completed, the surface of the breast is tiled with triangles and the skin is painted (ie, added) on top ( Fig. 5 ).




Fig. 5


Isolation of the breast from the chest wall.
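The volume calculation described above can be sketched directly: once the interior is filled with tetrahedrons, the total volume is simply the sum of the individual tetrahedron volumes, each obtained from the scalar triple product of its edge vectors. The 2 small tetrahedra below are illustrative geometry, not clinical data.

```python
# Sketch of volume calculation from a tetrahedral fill: total volume is the
# sum of per-tetrahedron volumes. The mesh below is invented for illustration.

def tet_volume(a, b, c, d):
    """Volume of the tetrahedron a,b,c,d: |triple product of edges| / 6."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return abs(det) / 6.0

mesh = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)),   # unit corner tet, vol 1/6
    ((1, 1, 1), (2, 1, 1), (1, 2, 1), (1, 1, 2)),   # translated copy, vol 1/6
]
total = sum(tet_volume(*t) for t in mesh)
print(total)  # 0.3333333333333333
```

The accuracy of the total depends entirely on the mesh boundaries, which is why correct landmark placement at the breast borders matters so much.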




Simulation of breast augmentation


Once the surface of the breast is defined and its volume is known, simulation of breast augmentation with placement of an implant can be accomplished. A library of different implant shapes (round vs anatomic) can be created with the exact dimensions and volumes of breast implants from several manufacturers. Just as the breast is elevated off the chest wall in surgery, it can be detached virtually in the computer model. The breast is treated as an elastic solid and deformed by an underlying implant according to the elasticity parameters of the algorithm, which can be altered to simulate the stiffness of the implant filler. The implant's volume, shape, and dimensions are known, so it can be represented by tetrahedrons ( Fig. 6 ), just as the breast is. The underlying implant can now deform the overlying breast to simulate augmentation. Because the volume of the breast is known and does not change (not accounting for postsurgical atrophy), the breast can change shape with its volume held constant by the software, thus providing greater accuracy to the simulation process ( Fig. 7 ). The final contours of the surface of the torso can be produced, initially as a gray surface ( Fig. 8 ). A gray surface is devoid of the distraction of skin texture and color, thus better revealing the surface contours. The skin can then be painted onto the gray surface (see Fig. 8 ).


Nov 20, 2017 | Posted by in General Surgery | Comments Off on Three-dimensional Imaging and Simulation in Breast Augmentation
