Article

Large-Field-of-View Visualization Utilizing Multiple Miniaturized Cameras for Laparoscopic Surgery

Jae-Jun Kim 1, Alex Watras 1, Hewei Liu 1, Zhanpeng Zeng 1, Jacob A. Greenberg 2, Charles P. Heise 2, Yu Hen Hu 1 and Hongrui Jiang 1,*

1 Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA; [email protected] (J.-J.K.); [email protected] (A.W.); [email protected] (H.L.); [email protected] (Z.Z.); [email protected] (Y.H.H.)
2 Department of Surgery, University of Wisconsin School of Medicine and Public Health, Madison, WI 53792, USA; [email protected] (J.A.G.); [email protected] (C.P.H.)
* Correspondence: [email protected]; Tel.: +1-608-265-9418

Received: 7 July 2018; Accepted: 22 August 2018; Published: 25 August 2018

Micromachines 2018, 9, 431; doi:10.3390/mi9090431
Abstract: The quality and the extent of intra-abdominal visualization are critical to a laparoscopic procedure. Currently, a single laparoscope is inserted into one of the laparoscopic ports to provide intra-abdominal visualization. The extent of this field of view (FoV) is rather restricted and may limit the efficiency and the range of operations. Here, we report a trocar-camera assembly (TCA) that promises a large FoV and improved efficiency and range of operations. A video stitching program processes video data from multiple miniature cameras and combines the videos in real time. The stitched video is then displayed on an operating monitor with a much larger FoV than that of a single camera. Taking advantage of this FoV, which is larger than that of current laparoscopic cameras, we successfully performed a standard and a modified bean drop task in a simulator box without any image distortion, demonstrating improved efficiency and range of operations. The TCA frees up a surgical port and potentially eliminates the need for physical maneuvering of the laparoscopic camera by an assistant.

Keywords: large field of view; miniaturized cameras; laparoscopy; bean drop task; surgical skills; video stitching
1. Introduction

Laparoscopic surgery has gained tremendous popularity over the past decades owing to numerous clinical benefits for patients, including decreased postoperative pain, decreased wound morbidity, earlier recovery and return to normal activities, and improved cosmesis [1–5]. These clinical benefits result from limiting incision size and thus surgical trauma. However, the tradeoffs are the need for excellent intracavitary visualization and for a surgeon skill set capable of performing these often more technically challenging procedures. The ability to carry out a successful laparoscopic procedure therefore depends strongly on the quality and the extent of the intra-abdominal visualization, which is critical for the identification of vital structures, the manipulation of tissue, the surgical performance, and the safe completion of each operation.

In the current system, a single laparoscope is inserted into one of the laparoscopic ports to provide this intra-abdominal visualization. This paradigm, however, has many well-known drawbacks in terms of visualization and efficiency of operation. One drawback in terms of visualization is the limited visual field of a single laparoscope, which severely limits the ability of a surgeon to view the operative field and perceive the surroundings, and thereby increases the possibility of surgical accidents.
A panoramic view has been suggested to overcome this limitation of the current laparoscope [6–8]. Naya et al. showed that panoramic views during laparoscopic surgery shorten the operating time and increase safety [7]. In addition, several attempts have been made to achieve large field-of-view (FoV) imaging systems for biomedical and surgical applications, such as the use of prisms [9,10], panomorph lenses [11], or mirror attachments [12,13]. These methods, however, suffer from a multitude of issues ranging from aberrations to blind zones. They also share a common requirement of proximity to the surgical field, which makes them prone to splatter and to interference or occlusion from the surgical instruments.

Recent developments in imaging devices could potentially improve the FoV and the performance of surgical vision systems. For example, attaching specially designed compound lenses to the front end of an endoscope has been shown to yield a large FoV and zoom capability [14,15]. Optical beam scanning methods are also promising for increasing the FoV of endoscopic imaging systems [16,17]. Kagawa et al. showed that the FoV of endoscopes can be increased by using a compact compound-eye camera [18]. Although these approaches offer advantages in terms of image resolution and aberrations, the complicated optical and electro-mechanical designs of their components lessen their durability and stability. More importantly, they still require coordinated operation of the imaging system and the surgical tools through multiple operative ports.

Multi-view vision systems have also been developed to improve the quality of surgical visualization. Tamadazte et al. increased the FoV by combining two miniaturized cameras with a conventional endoscope [19]. This augmented visual system shortened the procedure time and reduced the number of commands needed to operate the robotic endoscope holder. Multiple cameras have also been used to provide multiple points of view and maintain three-dimensional perception [20,21]. These multi-view visual systems, however, can only extend the point of view in one direction because of limitations in camera placement. They can only spread out multiple images and are not able to combine them into a single large one, which degrades the surgical performance as a result.

Another drawback, in terms of efficiency of operation, is that the current laparoscope must exclusively occupy one of the few available surgical ports for viewing, preventing that port from being used for other instruments. As a result, the number of ports always has to be increased by one. Most previous visualization systems that improved visual quality also needed to occupy one of the surgical ports.

In order to achieve a large FoV and improve the efficiency of operation, we developed a trocar-camera assembly (TCA) and a real-time video stitching algorithm. The TCA uses commercial off-the-shelf cameras and a simple and stable mechanical system (see Figure 1a,b). The miniaturized cameras are easily deployed and retrieved using the mechanical system of the TCA. Such convenient deployment and retrieval of the miniature cameras offered by this trocar design is an advantage over our previous attempts to apply multiple cameras at laparoscopic ports [22–24]. By using multiple miniaturized cameras, the TCA can provide a larger FoV than a current laparoscopic imaging system that uses a single camera (see Figure 1c,d).
Unlike in the current laparoscopic system, surgical instruments can now be inserted through any surgical port during an operation, as the port is no longer exclusively occupied by the laparoscopic camera. The algorithm stitches the videos from the multiple cameras without human-detectable latency and provides smooth, high-resolution videos with a significantly improved FoV. The FoV of the TCA was measured and compared with that of commercial laparoscopic cameras. To demonstrate the advantages of large-FoV visualization, we implemented a standard bean drop task and a modified one that requires complicated operations in all directions.
Figure 1. Design and concept of a trocar-camera assembly (TCA). The TCA consists of a push button, foldable camera supports, a surgical port, and miniaturized cameras. (a) The miniaturized cameras are stacked within the surgical port when the port is inserted into or extracted from a simulator box. (b) Miniaturized cameras are deployed after the surgical port is inserted into the simulator box. Surgical instruments can be inserted through the port because the port is not occupied by the cameras during operation. (c) Small field-of-view (FoV) of the current laparoscopic camera. (d) Large FoV provided by the multiple cameras.
2. Materials and Methods

2.1. Trocar-Camera Assembly (TCA)

The TCA consisted of a push button, foldable camera supports, a surgical port, and multiple miniaturized cameras. We designed the TCA structure. Most of the mechanical parts of the TCA were manufactured by 3D printing and computer numerical control (CNC) machining (Xometry, Gaithersburg, MD, USA), and the torsion springs were made by Michigan Steel Spring Company (Detroit, MI, USA), according to our design. Five Raspberry Pi cameras with flexible cables (Raspberry Pi MINI camera module (FD5647-500WPX-V2.0), RoarKit) were integrated with four foldable camera supports. An extra camera was integrated onto one of the four arms to provide a central, main view of the surgical field (see Results section).
2.1.1. Push Button

The push button consisted of a top lid, position-limiting rods, pliant plates, and torsion springs (see Figure 2a,b). Four position-limiting rods were sintered onto the top lid. The rods were inserted into, and could slide within, the corresponding channels in the surgical port, keeping the push button in position during the deployment and the retraction processes. Sequential deployment and retraction of the cameras were achieved by four pairs of pliant plates and torsion springs of different dimensions, which were also sintered on the top lid. When the push button was pushed down, a longer pair of pliant plates pushed its camera out earlier than the shorter plates did. When the first camera unit was in position (stopped by the clip on the surgical port), the pliant plate pair was separated by the fork structure of the clip and bypassed the camera support, and its torsion spring got bent, while the shorter plates continuously pushed the other cameras. In this manner, the cameras could be deployed out of the port one by one. Retraction of the cameras was performed in a reversed procedure: the shorter spring pulled its camera back earlier than the longer springs did.
Figure 2. Mechanical system of a TCA. (a) The working mechanism of the push button. Pliant plates and torsion springs push down the aluminum arms and deploy multiple cameras sequentially. The four fork-shape clips and the end stoppers of the arms are utilized to stop and hold the arms. The fork on the clip can separate the pliant plate pairs to bypass the stopper of the arms. (b) Cross-section of the push button and top view of the surgical port. (c) A foldable camera support consists of two aluminum arms, torsion springs, and a camera holder.
2.1.2. Surgical Port

The surgical port of the multi-camera laparoscope had a 150-mm-long plastic tube with a 26 mm outer diameter. Our design allowed for embedded camera supports, so the surgical port contained four trenches for the arms on its inner surface and four channels inside the sidewall of the port (see Figure 2b). On the upper end of the port, four fork-shape clips were utilized to stop and hold the arms, and the fork on each clip could separate the pliant plate pairs to bypass the stopper of the arms.

2.1.3. Foldable Camera Support

The foldable camera support consisted of two arms, a camera holder, and right-angled torsion springs (see Figure 2c). Each support could deploy one camera unit, and in this design example, four supports were utilized. Right-angled torsion springs connected the components with each other. When a support was retracted inside the port, the channel forced the springs straight down so that the cameras could be stacked up in the tube. Once the cameras and short arms were pushed out of the surgical port, the torsion springs restored the right angle and unfolded the cameras, as shown in Figure 1. Each long arm had an end stopper that could be locked by the clip on the surgical port. The spring anchor on top of the stopper could be hooked on one end of the torsion spring in the push button. The rear surface (the surface touching the port) of the other end had a groove for mounting the right-angled spring.

2.2. Demonstration Setup
The demonstration setup consisted of a TCA, a simulator box, an operating monitor, video data transfer boards, and a computer running the video stitching program (see Figure 3). The TCA was inserted through a simulator box and the five cameras were deployed by the mechanical system of the TCA. Five Raspberry Pi 3 Model B boards (Adafruit, New York, NY, USA) were used for video data transfer. Flex cables (Flex Cable for Raspberry Pi Camera or Display—300 mm, Adafruit) and ZIF connectors (15 pin to 15 pin ZIF 1.0 mm pitch FFC cable extension connector, Rzconne) were used to connect the miniaturized cameras and the Raspberry Pi boards. The computer and Raspberry Pi boards were connected by Ethernet cables and an Ethernet switch (JGS524, Netgear, San Jose, CA, USA). The operating monitor displayed the real-time stitched video after the computer completed the video stitching process.
Figure 3. Demonstration setup. The TCA is inserted into the simulator box. Video data from five cameras are transferred to a computer through video data transfer boards, and the stitched video is then shown on the operating monitor.
2.3. Image Processing for Video Stitching

Figure 4 shows the flowchart of the real-time video stitching process. First, we collected images from the five Raspberry Pi cameras, and the five Raspberry Pi boards streamed these videos using an MJPG-streamer. The resolution and frame rate of each streamed video were 640 × 480 pixels and 30 frames per second (fps), respectively. The video data from the boards were transferred to the computer running the stitching program through Ethernet cables. The stitching program combined the five videos in real time to show a large-FoV mosaic. The detailed stitching process is described in the following section. The operator watched the real-time stitched video on the operating monitor during the task.
Figure 4. Flowchart of the real-time video stitching process.
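To make the capture step concrete, the sketch below shows one way the five MJPG streams could be read on the stitching computer with OpenCV in Python. The IP addresses, port, and stream path are placeholders for illustration only; the actual addresses depend on how MJPG-streamer was launched on each Raspberry Pi board, and HTTP stream support requires an OpenCV build with a suitable video backend.

```python
import cv2

# Placeholder addresses of the five Raspberry Pi boards running MJPG-streamer.
# The port and "?action=stream" path are commonly used defaults and are
# assumptions here, not the exact configuration of the original system.
STREAM_URLS = [f"http://192.168.0.{i}:8080/?action=stream" for i in range(101, 106)]


def open_streams(urls):
    """Open one VideoCapture per MJPEG stream (640 x 480 at 30 fps)."""
    captures = [cv2.VideoCapture(url) for url in urls]
    if not all(cap.isOpened() for cap in captures):
        raise RuntimeError("Failed to open one or more camera streams")
    return captures


def grab_frames(captures):
    """Read one frame from every stream; returns a list of BGR images."""
    frames = []
    for cap in captures:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("A camera stream dropped a frame")
        frames.append(frame)
    return frames


if __name__ == "__main__":
    caps = open_streams(STREAM_URLS)
    frames = grab_frames(caps)
    print(f"Captured {len(frames)} frames of size {frames[0].shape}")
```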
The general process of video stitching follows these steps. First, we find distinguishable landmarks in a set of incoming video frames by using the speeded-up robust features (SURF) algorithm [25], which can identify recognizable regions in an image. This allows us to identify those features in other images. A feature matching algorithm then attempts to find features that show up simultaneously in images from multiple cameras. Once we have identified a sufficient number of feature matches, we attempt to find a transformation that can be applied to one image such that the features detected in that image are moved to their corresponding coordinates as seen from another camera. This transformation is subsequently applied to the full source image so that both images represent the same projection from the scene to the camera. Finally, the images are blended together to form a mosaic.

Initial stitching methods based on the work by Szeliski [26] proved to be too slow for real-time video work, as the amount of time required to detect feature points alone introduced significant latency. As a remedy, we instead split the task of image stitching into an initialization phase and a streaming phase, as proposed by Zheng [27].

The initialization phase, which takes several seconds (e.g., 5–6 s), allowed us to perform as much of the computation as possible ahead of time, removing it from the streaming stage. During this phase, we computed feature points from the scene and used them to generate pixel correspondences between the cameras. We chose a single camera to be our main view and then used the detected feature points to determine which transformations could best align the images. The chosen transformations were stored for the streaming phase.
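The sketch below illustrates the initialization phase just described: detect features, match them between a side view and the chosen main view, and fit a projective transformation with RANSAC. It uses OpenCV's ORB detector as a freely available stand-in for the SURF detector cited above (SURF lives in the non-free contrib module); everything other than the OpenCV calls is illustrative, not the exact implementation used in this work.

```python
import cv2
import numpy as np


def estimate_homography(side_img, main_img, min_matches=10):
    """Estimate the projective transform mapping a side-view image onto the
    coordinate system of the main-view image (initialization phase)."""
    # ORB is used here as a stand-in for the SURF detector described in the text.
    detector = cv2.ORB_create(nfeatures=2000)
    kp_side, des_side = detector.detectAndCompute(side_img, None)
    kp_main, des_main = detector.detectAndCompute(main_img, None)

    # Brute-force matching with cross-checking discards ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_side, des_main), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("Not enough feature matches to align the two views")

    src = np.float32([kp_side[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_main[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while fitting the 3x3 homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H
```

One such homography per side camera would be computed once during this phase and then reused unchanged for every streamed frame.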
The streaming phase then only required that the stored transformations be applied to each incoming video frame as it was received, and that the images be blended together. A variety of blending algorithms exist that aim to hide image seams for more realistic-looking views. However, we chose the most basic blending method in order to ensure fast computation and clarity of the recorded scene. With this method, we chose an ordering of the cameras and placed each camera, in order, onto a unified canvas. In this way, no extra artifacts were introduced by the blending algorithm.

In order to minimize the latency and speed up the stitching process, we used a multi-threaded method. Eleven threads were created in this study. Five threads were used to capture images from the five cameras. Each of these five threads transferred the data from one camera and skipped any frames that could not be processed in time, so the image used for stitching was always the latest frame; this eliminated latency as much as possible. Four threads were used to compute the homographies and warp the images. Each of these four threads warped images from one side-view camera to the perspective of the main-view camera, so that the main stitching process could be executed in parallel; this significantly reduced the overall time required for each frame. One thread was used to transfer images between the different threads and display the final stitched panorama. The last thread was used to record the camera streams, the panorama stream, and the output as video files.
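As a simplified, single-threaded illustration of the streaming phase and of the basic blending described above, the sketch below warps each side-view frame with its stored homography and pastes the frames, in a fixed order, onto a shared canvas, with the main view placed last. The canvas size and offset are placeholders, and the homographies are assumed to already include the translation that positions the main view on the canvas; the actual system distributes the per-camera warps across the threads described above.

```python
import cv2
import numpy as np

CANVAS_SIZE = (1600, 1200)  # (width, height) of the output mosaic; placeholder value


def compose_mosaic(main_frame, side_frames, homographies, main_offset=(480, 360)):
    """Warp each side frame with its precomputed homography and paste all frames,
    in a fixed order, onto a single canvas (basic blending, no seam hiding)."""
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=np.uint8)

    # Each homography is assumed to map its side view directly into canvas
    # coordinates (the main-view alignment composed with the canvas translation).
    for frame, H in zip(side_frames, homographies):
        warped = cv2.warpPerspective(frame, H, CANVAS_SIZE)
        mask = warped.any(axis=2)      # pixels actually covered by this view
        canvas[mask] = warped[mask]    # later views overwrite earlier ones

    # The main view is pasted last so it is never overwritten by a side view.
    x, y = main_offset
    h, w = main_frame.shape[:2]
    canvas[y:y + h, x:x + w] = main_frame
    return canvas
```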
2.4. FoV Measurement

To compare the FoV obtained from the TCA and from current laparoscopic cameras (5 mm and 10 mm 0° laparoscopic cameras), we captured images of grid lines with numbers printed at a 25 mm pitch. The cameras were placed 165 mm away from the grid lines.
2.5. Standard and Modified Bean Drop Task

A simulator box (Fundamentals of Laparoscopic Surgery (FLS)) was used to implement a standard and a modified bean drop task. The saucer, inverted cups, and beans used in these tasks were commercial products (SIMULAB, Seattle, WA, USA). We used a 5 mm atraumatic grasper (Ethicon, Somerville, NJ, USA) to grasp the beans.

The standard bean drop task is one of the basic skill tasks for laparoscopic training. It requires an operator to grasp the beans with a grasper, move the beans 15 cm from the saucer to the inverted cup, and drop them into the hole of the inverted cup (see Results section).

We also developed a modified bean drop task, which consisted of one saucer with beans and four inverted cups placed in four different positions (see Results section). The goal of this task was for an operator to grasp and move each bean 10.5 cm from the saucer to the inverted cups and then drop them into the holes of the inverted cups. This modified bean drop task therefore evaluates the operating skill in all directions.

3. Results

The four aluminum arms and five cameras were easily, and sequentially, deployed and retrieved by the mechanical system of the TCA within several seconds, as shown in Figure 5. Thus, the inner side of the port was not occupied by the cameras during the operation and could still be utilized by the instruments. Therefore, our system could reduce the number of ports or free up a surgical port, which would be a prominent advantage. Moreover, this stacked camera arrangement could efficiently use the small inner space of the laparoscopic port and maximize the camera size.
Figure 5. Deployment and retraction of the multiple cameras. (a–c) Mechanical system of the push button. Stacked cameras can be deployed and retrieved in a single action within several seconds. (d) Camera positions after deployment. (e–h) The mechanical system of the foldable camera supports. Four foldable camera supports and five cameras are sequentially deployed and retrieved. The inner side of the port is not occupied by the cameras during operation and allows insertion of surgical instruments.
Our video stitching program used the video data from the five individual cameras and combined these five videos in real time to show a large-FoV mosaic. A projective transformation was calculated using matched feature points to map each image from the different cameras into a single coordinate system. One of the five cameras, placed at the center of the camera array, was considered to be the main view, and the required transformations were then computed to map the images from the five different cameras onto the coordinate system of the main view. As our cameras remained stationary relative to each other and to the scene plane during the video streaming, we simply applied each of the calculated transformations and merged the images together. Our stitching algorithm was able to provide videos at approximately 26 fps, comparable to the 30 fps offered by the individual cameras. The total latency of the stitching process was approximately 200 ms, and surgeons noted that they did not notice lag from their actions to the display (see Video S1).

We compared the FoV of commercial laparoscopic cameras with that of the TCA by using grid lines (see Figure 6). Figure 6a–c shows captured images from a 10 mm 0° laparoscopic camera, a 5 mm 0° laparoscopic camera, and the TCA, respectively. As shown in Figure 6, the TCA covers a larger area and detects more grid lines than current laparoscopic cameras. The 10 mm and 5 mm 0° laparoscopic cameras can detect up to grid line number 3 and number 4, respectively. Compared with current laparoscopic cameras, the TCA can detect up to grid line number 5 without image distortion. Based on this measurement, the FoVs of the 10 mm and 5 mm 0° laparoscopic cameras and of the TCA are 49°, 62°, and 74°, respectively. The FoV of the TCA is thus increased by 51% and 19% compared with the 10 mm and 5 mm 0° laparoscopic cameras, respectively. We can also compare the visible imaging area using the number of visible squares. The numbers of visible squares for the 10 mm and 5 mm 0° laparoscopic cameras and for the TCA were 36, 56, and 84, respectively. The visible area of the TCA is thus increased by 133% and 50% compared with the 10 mm and 5 mm 0° laparoscopic cameras, respectively.
Figure 6. FoV measurement using grid lines printed on a paper sheet. Captured images from (a) a 10 mm 0° laparoscopic camera, (b) a 5 mm 0° laparoscopic camera, and (c) the TCA. The TCA can cover a larger FoV compared with commercial laparoscopic cameras. Note the distortion in the views acquired from the 10 mm 0° laparoscopic camera (a) and the 5 mm 0° laparoscopic camera (b).
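The FoV values reported above are consistent with the measurement geometry, assuming the grid-line count is taken from the image center outward. With the cameras 165 mm from the target and a 25 mm line pitch, a camera that resolves up to line number n sees a half-width of 25n mm, so its FoV is approximately 2·arctan(25n/165). For n = 3, 4, and 5 this gives 2·arctan(75/165) ≈ 49°, 2·arctan(100/165) ≈ 62°, and 2·arctan(125/165) ≈ 74°, matching the values quoted for the 10 mm and 5 mm 0° laparoscopic cameras and the TCA, respectively.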
We successfully performed two different sets of a standard bean drop task in a simulator box by using the TCA (see Figure 7, Video S2, Video S3). One set of the standard bean drop task was performed with an arrow mark guiding the way from the saucer to the inverted cup (Figure 7a–c); the other was performed without an arrow mark and with a longer moving distance (Figure 7d–f). In both tasks, the operator could track the whole motion of the beans and visualize the beans and the inverted cup simultaneously through the TCA. Unlike the current laparoscopic system, the TCA does not need the arrow mark for guiding. The bean was grasped by a grasper (Figure 7a,d) and dropped into the hole of the inverted cup (Figure 7c,f). The beans and the inverted cup were 155 mm and 115 mm away from the camera array, respectively. The red dotted circles indicate the start and end positions, and the red arrows show the trajectories of the moving beans. During the task, the operator continuously watched the real-time stitched video on the operating monitor and used only one hand to perform the two sets of the standard bean drop task, without any physical camera maneuvering. This is a clear advantage over current laparoscopic cameras, where separate camera operation by an assistant is needed. In contrast, the captured images from two single cameras (Figure 7g,h) show that a single camera cannot cover the saucer and the inverted cup concurrently.
Figure 7. Captured images from a real-time stitched video during a standard bean drop task in the simulator box by using the TCA (a–c), and a standard bean drop task with a longer moving distance and without an arrow mark (d–f). We successfully performed a standard bean drop task by using the real-time stitched video without any physical camera maneuver. The red dotted circles and arrows in each figure show the initial and end positions, and the trajectories, of the moving beans grasped by a grasper. (g,h) The images from two single cameras show that a single camera cannot cover the saucer and the inverted cup simultaneously.
We also successfully performed a modified bean drop task in the simulator box by using the TCA (see Figure 8, Video S4). Figure 8a–f shows captured images from the real-time stitched video. The first bean was grasped by a grasper (Figure 8b) and dropped into the hole of the inverted cup placed in the forward direction (Figure 8c). The other three beans were also grasped and dropped into the holes of the inverted cups placed in the left (Figure 8d), backward (Figure 8e), and right (Figure 8f) directions. In this modified bean drop task, as in the standard one, the operator always watched the real-time stitched video on the operating monitor during the task and used only one hand to perform the task, without any physical camera maneuvering. In contrast, the captured images from two single cameras (Figure 8g,h) show that a single camera cannot cover the saucer and the inverted cups simultaneously.
Figure 8. Captured images from a real-time stitched video during a modified bean drop task in the simulator box by using the TCA. (a) Setup of the modified bean drop task. (b–f) We successfully performed a modified bean drop task by using the real-time stitched video without any physical camera maneuver. The red dotted circles and arrows in each figure show the initial and end positions, and the trajectories, of the moving beans picked by a grasper. (g,h) The captured images from two single cameras show that a single camera cannot cover the saucer and the inverted cup simultaneously.
4. Discussion

We developed the TCA and a real-time video stitching algorithm and demonstrated their high performance using laparoscopic surgical training tasks. For a large FoV, multiple videos from five different cameras were stitched by our stitching algorithm, and the latency of the stitched videos was minimized by a multi-threading method. The TCA can provide a larger FoV than commercial laparoscopic cameras. We performed a standard and a modified bean drop task to verify the performance of the TCA and the video stitching algorithm.

The TCA can provide a large FoV, eliminate the need for physical maneuvering of the laparoscopic camera, and free up a surgical port. The large FoV of the operating scene allows the operator to easily manipulate a grasper in all directions, including the backward direction. In the current laparoscopic systems, which are based on a single camera, the visualization is best suited to the forward direction. Hence, the left, right, and especially the backward bean drops are challenging to carry out using the current system; even experienced surgeons find it difficult to operate against or into the camera. In addition, the operator could handle the modified bean drop task without any additional camera maneuvers, thanks to the larger FoV. The left, right, and backward bean drops were performed with as much ease as a forward bean drop. This advantage of large-FoV visualization can potentially eliminate the need for a dedicated assistant operating the laparoscopic camera, as is required in the current system. Moreover, the inner part of the surgical port is not occupied by the cameras during an operation, as the multiple cameras are deployed and flared outside of the port. Therefore, the TCA can reduce the number of ports or free up a surgical port during an operation, which is a prominent advantage.

In this study, we used commercially available miniaturized cameras to make the TCA. The commercial cameras have their own designs for the lens, cables, and image sensors. These design parameters affect the design of the TCA, especially the port diameter. Because the design of the commercial cameras is not optimized for our specific application, the TCA in this study has a larger port diameter than the commercial ones. In future work, we will develop our own miniaturized cameras and thus decrease the port diameter to that of the commercial ones.
The TCA has four aluminum arms for deploying the multiple cameras, which occupy some space depending on the arm length. As the intra-abdominal space is limited during laparoscopic surgery, the arm length should be as short as possible. The current TCA has 5 cm arm lengths, and the orientations of the cameras are all parallel. The arm length can be shortened by adjusting the camera angles; further development is needed to optimize the arm length and the camera angles. We used the illumination system of the simulator box for imaging, as the TCA does not have an illumination system of its own. In this study, however, our first focus was to develop the TCA and the video stitching algorithm to demonstrate the advantages of large-FoV visualization for laparoscopic surgery. In future work, we will add an illumination system to the TCA.

Image stitching, as performed in this study, relies on a couple of assumptions. In order to speed the algorithm up to work on real-time video, as is necessary for surgical use, we assumed that our cameras would remain stationary relative to each other. Thus, if the camera array is used in such a way that the array is bent or deformed, this method will fail to produce a good result. The second assumption has to do with the projection model of image stitching. When a camera is moved translationally through space, it witnesses parallax, the phenomenon by which closer objects seem to move much faster than objects farther away. Parallax can lead to artifacts appearing in the stitched image, such as those seen in Figure 8, where the cups appear misaligned. Since the image stitching is performed without any knowledge of the 3D geometry of the scene, it relies on the assumption that the portion of the scene to be stitched lies approximately on a single plane in 3D space. These artifacts become more noticeable as the distance between the cameras increases, as the scene becomes less planar, or as the cameras move closer to the scene. While some methods have been proposed for correcting parallax discontinuities [28–30], none of them has yet proved reliable and fast enough to completely correct for parallax in the surgical setting. We are working on a real-time 3D reconstruction that can potentially remove parallax artifacts.

In order to validate the potential clinical benefits of large-FoV visualization, in the next phase we will recruit surgeons at various levels of training and experience to perform numerous laparoscopic surgical tasks using both current laparoscopic cameras and the TCA, and compare their performance.

5. Conclusions

The TCA offers large-FoV imaging, improves the efficiency of operation, and enlarges the operation range. Moreover, the TCA frees up a surgical port and potentially eliminates the need for physical maneuvering of the laparoscopic camera by the surgeon or an operative assistant. Further development of the TCA and the video stitching algorithm can offer a high-performance, more efficient imaging system for laparoscopic surgery.

Supplementary Materials: The following are available online at http://www.mdpi.com/2072-666X/9/9/431/s1. Video S1: The setup and performance of a modified bean drop task. Video S2: Standard bean drop task. Video S3: Standard bean drop task without the mark.
Video S4: Modified bean drop task.

Author Contributions: Conceptualization, J.-J.K., A.W., H.L., J.A.G., C.P.H., Y.H.H. and H.J.; Data curation, J.-J.K., A.W. and Z.Z.; Formal analysis, J.-J.K. and A.W.; Funding acquisition, J.A.G., C.P.H., Y.H.H. and H.J.; Investigation, J.-J.K., A.W., H.L., Z.Z., J.A.G., C.P.H., Y.H.H. and H.J.; Methodology, J.-J.K., A.W., H.L., J.A.G., C.P.H., Y.H.H. and H.J.; Project administration, H.J.; Resources, J.A.G., C.P.H., Y.H.H. and H.J.; Software, A.W. and Z.Z.; Supervision, Y.H.H. and H.J.; Validation, Y.H.H. and H.J.; Visualization, J.-J.K., A.W. and H.L.; Writing–original draft, J.-J.K., A.W., H.L., Z.Z. and H.J.; Writing–review & editing, J.-J.K., A.W., H.L., J.A.G., C.P.H., Y.H.H. and H.J.

Funding: This research was funded by the U.S. National Institutes of Health (Grant number: R01EB019460).

Acknowledgments: The authors would like to thank Jianwei Ke and John Jansky for their assistance and discussion.

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Keus, F.; de Jong, J.A.F.; Gooszen, H.G.; van Laarhoven, C.J.H.M. Laparoscopic versus open cholecystectomy for patients with symptomatic cholecystolithiasis. Cochrane Database Syst. Rev. 2006, 4, CD006231.
2. Sauerland, S.; Lefering, R.; Neugebauer, E.A.M. Laparoscopic versus open surgery for suspected appendicitis. Cochrane Database Syst. Rev. 2004, 4, CD001546.
3. Schwenk, W.; Haase, O.; Neudecker, J.; Müller, J.M. Short term benefits for laparoscopic colorectal resection. Cochrane Database Syst. Rev. 2005, 20, CD003145.
4. Aziz, O.; Athanasiou, T.; Tekkis, P.P.; Purkayastha, S.; Haddow, J.; Malinovski, V.; Paraskeva, P.; Darzi, A. Laparoscopic versus open appendectomy in children: A meta-analysis. Ann. Surg. 2006, 243, 17–27.
5. Cullen, K.A.; Hall, M.J.; Golosinskiy, A. Ambulatory surgery in the United States, 2006. Natl. Health Stat. Rep. 2009, 11, 1–25.
6. Sumi, Y.; Egi, H.; Hattori, M.; Suzuki, T.; Tokunaga, M.; Adachi, T.; Sawada, H.; Mukai, S.; Kurita, Y.; Ohdan, H. A prospective study of the safety and usefulness of a new miniature wide-angle camera: The "BirdView camera system". Surg. Endosc. 2018, 1–7.
7. Naya, Y.; Nakamura, K.; Araki, K.; Kawamura, K.; Kamijima, S.; Imamoto, T.; Nihei, N.; Suzuki, H.; Ichikawa, T.; Igarashi, T. Usefulness of panoramic views for novice surgeons doing retroperitoneal laparoscopic nephrectomy. Int. J. Urol. 2009, 16, 177–180.
8. Kawahara, T.; Takaki, T.; Ishii, I.; Okajima, M. Development of broad-view camera unit for laparoscopic surgery. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 927–930.
9. Kobayashi, E.; Masamune, K.; Sakuma, I.; Dohi, T. A wide-angle view endoscope system using wedge prisms. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2000; Delp, S.L., DiGoia, A.M., Jaramaz, B., Eds.; Springer: Heidelberg, Germany, 2000; pp. 661–668.
10. Yamauchi, Y.; Yamashita, J.; Fukui, Y.; Yokoyama, K.; Sekiya, T.; Ito, E.; Kanai, M.; Fukuyo, T.; Hashimoto, D.; Iseki, H.; et al. A dual-view endoscope with image shift. In CARS 2002 Computer Assisted Radiology and Surgery; Lemke, H.U., Inamura, K., Doi, K., Vannier, M.W., Farman, A.G., Reiber, J.H.C., Eds.; Springer: Heidelberg, Germany, 2002; pp. 183–187.
11. Roulet, P.; Konen, P.; Villegas, M.; Thibault, S.; Garneau, P.Y. 360° endoscopy using panomorph lens technology. In Endoscopic Microscopy V; Tearney, G.J., Wang, T.D., Eds.; SPIE: Bellingham, WA, USA, 2010; Volume 7558, p. 75580T.
12. Sagawa, R.; Sakai, T.; Echigo, T.; Yagi, K.; Shiba, M.; Higuchi, K.; Arakawa, T.; Yagi, Y. Omnidirectional vision attachment for medical endoscopes. In Proceedings of the 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras—OMNIVIS, Marseille, France, 17 October 2008.
13. Arber, N.; Grinshpon, R.; Pfeffer, J.; Maor, L.; Bar-Meir, S.; Rex, D. Proof-of-concept study of the Aer-O-Scope omnidirectional colonoscopic viewing system in ex vivo and in vivo porcine models. Endoscopy 2007, 39, 412–417.
14. Ouzounov, D.G.; Rivera, D.R.; Webb, W.W.; Xu, C. High resolution, large field-of-view endomicroscope with optical zoom capability. In Endoscopic Microscopy VIII; Tearney, G.J., Wang, T.D., Eds.; SPIE: Bellingham, WA, USA, 2013; Volume 8575, p. 85750G.
15. Tseng, Y.-C.; Han, P.; Hsu, H.-C.; Tsai, C.-M. A flexible FOV capsule endoscope design based on compound lens. J. Disp. Technol. 2016, 12, 1798–1804.
16. Seibel, E.J.; Fauver, M.; Crossman-Bosworth, J.L.; Smithwick, Q.Y.J.; Brown, C.M. Microfabricated optical fiber with microlens that produces large field-of-view video-rate optical beam scanning for microendoscopy applications. In Optical Fibers and Sensors for Medical Applications III; Gannot, I., Ed.; SPIE: Bellingham, WA, USA, 2003; Volume 4957, p. 46.
17. Rivera, D.R.; Brown, C.M.; Ouzounov, D.G.; Pavlova, I.; Kobat, D.; Webb, W.W.; Xu, C. Compact and flexible raster scanning multiphoton endoscope capable of imaging unstained tissue. Proc. Natl. Acad. Sci. USA 2011, 108, 17598–17603.
18. Kagawa, K.; Kawahito, S. Endoscopic application of a compact compound-eye camera. Makara J. Technol. 2014, 18, 128–132.
19. Tamadazte, B.; Agustinos, A.; Cinquin, P.; Fiard, G.; Voros, S. Multi-view vision system for laparoscopy surgery. Int. J. Comput. Assist. Radiol. Surg. 2014, 10, 195–203.
20. Silvestri, M.; Ranzani, T.; Argiolas, A.; Vatteroni, M.; Menciassi, A. A multi-point of view 3D camera system for minimally invasive surgery. Procedia Eng. 2012, 47, 1211–1214.
21. Suzuki, N.; Hattori, A. Development of new augmented reality function using intraperitoneal multi-view camera. In Augmented Environments for Computer-Assisted Interventions; Linte, C.A., Chen, E.C.S., Berger, M.-O., Moore, J.T., Holmes, D.R., Eds.; Springer: Heidelberg, Germany, 2013; pp. 67–76.
22. Kanhere, A.; Van Grinsven, K.L.; Huang, C.C.; Lu, Y.S.; Greenberg, J.A.; Heise, C.P.; Hu, Y.H.; Jiang, H. Multicamera laparoscopic imaging with tunable focusing capability. J. Microelectromech. Syst. 2014, 23, 1290–1299.
23. Kanhere, A.; Aldalali, B.; Greenberg, J.A.; Heise, C.P.; Zhang, L.; Jiang, H. Reconfigurable micro-camera array with panoramic vision for surgical imaging. J. Microelectromech. Syst. 2013, 22, 989–991.
24. Aldalali, B.; Fernandes, J.; Almoallem, Y.; Jiang, H. Flexible miniaturized camera array inspired by natural visual systems. J. Microelectromech. Syst. 2013, 22, 1254–1256.
25. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Understand. 2008, 110, 346–359.
26. Szeliski, R. Image alignment and stitching: A tutorial. Found. Trends Comput. Graph. Vis. 2006, 2, 1–104.
27. Zheng, M.; Chen, X.; Guo, L. Stitching video from webcams. In International Symposium on Visual Computing; Bebis, G., Ed.; Springer: Heidelberg, Germany, 2008; Volume 5359 LNCS, pp. 420–429.
28. Watras, A.; Ke, J.; Zeng, Z.; Kim, J.-J.; Liu, H.; Jiang, H.; Hu, Y.H. Parallax mitigation for real-time close field video stitching. In Proceedings of the CSCI 2017: International Conference on Computational Science and Computational Intelligence, Las Vegas, NV, USA, 14 December 2017.
29. Zhang, F.; Liu, F. Parallax-Tolerant Image Stitching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3262–3269.
30. Kang, S.B.; Szeliski, R.; Uyttendaele, M. Seamless Stitching Using Multi-Perspective Plane Sweep; Technical Report MSR-TR-2004-48; Microsoft Research: Redmond, WA, USA, 2004; pp. 1–14.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).