Automated Micro and Nanoscale Assembly using Optical Tweezers
S.K. Gupta, A. Balijepalli, A. Banerjee, S. Chowdhury, T. LeBrun, T. Peng, and A. Varshney
This project is sponsored by NSF and NIST.
Keywords: micromanipulation, nanomanipulation, optical tweezers, assembly planning
Optical tweezers can trap and move a variety of microscale and nanoscale components without physical contact, and hence without the damage that stiction or contact-force deformation can cause. At the same time, optical tweezers provide a broad range of positioning and orienting capabilities for placing components at desired locations in the workspace. By utilizing multiple trapping beams, multiple operations can be performed in parallel, and the instrumentation can be based on inexpensive lasers and piezo-actuators. The technique can therefore scale to production in terms of both cost and efficiency, making optical tweezers a very promising technology for micro and nanoscale assembly. Currently, however, optical tweezers are mainly used in research laboratories. In order to use optical tweezers in production processes, the following challenges need to be addressed:
- The overall operation speed has to increase considerably to ensure that manufacturing can be performed in a cost-competitive manner.
- The overall operation yield has to increase considerably to ensure that a large number of assembly operations can be performed without encountering assembly errors.
- The reliance on highly trained expert human operators has to decrease considerably to ensure widespread use of this technology.
We believe that addressing these challenges will make optical tweezers a viable technology for prototyping nanoscale electronic devices, manufacturing customized nanostructures for biomedical applications, and repairing and reworking nanostructures produced using other processes.
The objectives of this project are:
- Development of a 3D imaging system for on-line monitoring of the assembly process. This will ensure that the system is aware of the positions and orientations of all the components in the workspace, thereby decreasing assembly errors. This capability is also a prerequisite for autonomous operation.
- Development of planning algorithms for automated operations. The system must be able to perform assembly operations in an automated manner. The human operator will have high-level control and manual override capabilities. Under normal operating conditions, the system will automatically generate the traps and transport components.
Overview of Approach
On-Line Monitoring: On-line monitoring requires a new vision system for 3D optical microscopy of the workspace at video frame rates. Fast 3D imaging is important for operator feedback while prototyping new devices using optical tweezers, and requires new techniques to recognize, track, and visualize micro and nanoscale components. Note that although traditional optical microscopy cannot resolve the shapes of nanostructures, many of them (e.g., nanowires and quantum dots) can still be observed in the optical microscope and their positions measured with nanometer-scale resolution. This allows us to use optical techniques to follow nanoassembly processes.
Recent advances in ultramicroscopy are also pushing the resolution of optical microscopy to 50 nm and below, so optical microscopy can serve as a key tool for nanoscale measurements. Developing a new 3D vision system requires analyzing a stack of images produced by a camera mounted on the optical tweezers setup, identifying from these images the components present in the workspace, and estimating their locations in 3D space. We first segment each image in the stack into connected regions. Each connected region is then analyzed for the presence of a component signature and used to estimate the type, size, location, and orientation of the component.
Because the components are three dimensional, each component leaves signatures in multiple images, so estimates generated from one image can be combined with estimates from other images to produce an overall estimate. The overall estimates are used to compute and render a synthetic 3D scene showing the current state of the workspace.
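The segment-then-combine pipeline can be sketched as follows. This is a minimal illustration, not the project's actual algorithm: it assumes components appear as bright blobs, uses a simple flood-fill labeler in place of a production segmenter, and combines per-slice centroid estimates by area-weighted averaging, with the slice index standing in for focal depth.

```python
import numpy as np

def label_regions(binary):
    """Label 4-connected foreground regions of a boolean image via flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    h, w = binary.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                todo = [(i, j)]
                while todo:
                    y, x = todo.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        todo.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels, current

def estimate_components(stack, threshold=0.5, match_radius=5.0):
    """Combine per-slice region estimates into 3D component estimates.

    stack: list of 2D float images taken at increasing focal depths
    (z = slice index).  Regions in different slices are matched by
    lateral proximity and merged by area-weighted averaging.
    Returns a list of (x, y, z, total_area) tuples.
    """
    components = []  # each entry holds area-weighted coordinate sums
    for z, image in enumerate(stack):
        labels, n = label_regions(image > threshold)
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            area = len(xs)
            cx, cy = xs.mean(), ys.mean()
            for c in components:
                # Match against the component's current mean position
                if np.hypot(c["x"] / c["w"] - cx, c["y"] / c["w"] - cy) < match_radius:
                    c["x"] += cx * area; c["y"] += cy * area
                    c["z"] += z * area;  c["w"] += area
                    break
            else:
                components.append({"x": cx * area, "y": cy * area,
                                   "z": z * area, "w": area})
    return [(c["x"] / c["w"], c["y"] / c["w"], c["z"] / c["w"], c["w"])
            for c in components]
```

For example, a single bead that appears as the same blob in three consecutive slices would be merged into one component whose z estimate lies at the middle slice.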
To be useful in automated planning, the 3D scene must be computed in a very short amount of time and updated at least ten times per second. Hence, we are developing efficient algorithms for this task. This requires identifying the best possible component signatures to use, as well as efficient algorithms to verify the presence of a signature in an image region.
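One common way to verify a signature in a candidate region, shown here as an illustration rather than as the project's actual method, is normalized cross-correlation against a stored template: a score near 1.0 indicates a likely match, is invariant to brightness and contrast shifts, and is cheap enough to threshold at frame rate.

```python
import numpy as np

def signature_score(region, template):
    """Normalized cross-correlation between an image region and a
    component-signature template of the same shape.
    Returns a value in [-1, 1]; 1.0 means a perfect match up to an
    affine intensity change (brightness offset and contrast scale)."""
    r = region - region.mean()
    t = template - template.mean()
    denom = np.sqrt((r ** 2).sum() * (t ** 2).sum())
    if denom == 0:
        return 0.0  # a flat region carries no signature information
    return float((r * t).sum() / denom)
```

A region would typically be accepted when its score against some template exceeds a tuned threshold (e.g., 0.8).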
Planning Algorithms: Assembling micro and nanoscale components involves trapping them and moving them to the desired locations. This requires moving the components through the workspace while avoiding collision with the other components in the workspace.
Untrapped components in the workspace constantly move due to random Brownian motion, so the workspace configuration changes continuously. The trapping laser can be time-shared to move multiple components; hence it can also be used to clear the path by moving components that obstruct the target component. The physics of trapping imposes constraints on the speed at which the laser can move a trapped particle through the fluidic workspace.
Moreover, there are also constraints on the shape of the trap and on the clearance that needs to be maintained between the trap and the other components in the workspace. To perform planning, we are identifying and modeling the relevant constraints in a geometric framework.
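The speed constraint can be illustrated with a simple force balance, assuming (for illustration only) a spherical particle in water obeying Stokes drag: the viscous drag F = 6πηrv at transport speed v must not exceed the maximum lateral trapping force, which in practice comes from trap calibration. The function below is a sketch under those assumptions, not the project's constraint model.

```python
import math

def max_transport_speed(trap_force_pn, radius_um, viscosity_pa_s=8.9e-4):
    """Upper bound on trap translation speed for a spherical particle.

    Stokes drag F = 6*pi*eta*r*v must stay below the maximum lateral
    trapping force.  trap_force_pn: maximum trap force in piconewtons
    (assumed known from calibration); radius_um: particle radius in
    micrometers; viscosity defaults to water at room temperature.
    Returns the speed bound in micrometers per second."""
    force_n = trap_force_pn * 1e-12
    radius_m = radius_um * 1e-6
    v = force_n / (6 * math.pi * viscosity_pa_s * radius_m)
    return v * 1e6  # m/s -> um/s
```

With a representative 10 pN trap force, a 1 um radius bead could be moved at up to a few hundred micrometers per second; larger particles must be moved proportionally more slowly.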
We have formulated the motion-planning problem with the goal of delivering a component to its desired location in the minimum possible expected time. The nominal transport time is combined with the expected collision-circumvention time to compute the expected time for completing the nominal path. Two types of collision-circumvention strategies are pursued: local path alterations to avoid imminent collisions, and trapping obstructing components to remove them. When these strategies fail, we plan to use recovery plans to cope with unavoidable collisions.
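The expected-time formulation can be sketched with a simple additive model (an illustration, not the project's exact cost function): each potential obstruction along a nominal path contributes its collision probability times its expected circumvention time, and the planner prefers the path minimizing the total.

```python
def expected_path_time(nominal_time, collision_events):
    """Expected completion time of a nominal path: the nominal transport
    time plus, for each potential obstruction, its collision probability
    times the expected time to circumvent it (a local detour or a
    trap-and-remove operation).  collision_events is a list of
    (probability, circumvention_time) pairs."""
    return nominal_time + sum(p * t for p, t in collision_events)

def best_path(candidates):
    """Pick the candidate with minimum expected completion time.
    candidates: list of (name, nominal_time, collision_events)."""
    return min(candidates, key=lambda c: expected_path_time(c[1], c[2]))
```

Under this model a longer but clearer path can beat a shorter one that is likely to require circumvention, which is exactly the trade-off the planner must weigh.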
We are developing efficient algorithms for nominal path planning, collision circumvention planning, and post-collision recovery planning.
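As a simplified stand-in for nominal path planning, the sketch below runs A* on a 2D occupancy grid. The actual planner must handle a continuous 3D workspace whose obstacles move under Brownian motion, but the core structure, a shortest path around blocked regions, is the same.

```python
import heapq

def plan_path(grid, start, goal):
    """A* shortest path on a 2D occupancy grid (True = blocked cell).
    start and goal are (row, col) tuples; moves are 4-connected with
    unit cost and a Manhattan-distance heuristic.
    Returns the list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start)]
    g = {start: 0}          # best known cost-to-reach
    came = {start: None}    # backpointers for path reconstruction
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return path[::-1]
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dy, node[1] + dx)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                ng = g[node] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came[nxt] = node
                    heapq.heappush(frontier, (ng + h(nxt), nxt))
    return None
```

Collision circumvention can then be viewed as replanning on an updated grid when monitoring reports that an obstacle has drifted onto the nominal path.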
For additional information please contact:
Dr. Satyandra K. Gupta
Department of Mechanical Engineering and Institute for Systems Research
3143 Martin Hall
University of Maryland
College Park, MD 20742
Project Website: http://terpconnect.umd.edu/~skgupta/OT.html