InstantEye’s small size and low weight make it ideal for operation in constrained, indoor environments. A skilled pilot can navigate InstantEye via the first-person view from several rooms away (up to about 200 feet indoors). However, we want flying InstantEye indoors to be as intuitive as flying it outdoors. To accomplish this, we have co-designed the software and hardware to achieve maximum efficiency in the lightest possible payload. Like all things InstantEye, the driving ethos behind our design methodology is building for very low size, weight, power, and cost (SWaP-C).
The first challenge of indoor flight is obstacle avoidance and collision rejection. The indoor payload uses multiple sonars and a wide-angle, forward-looking camera to detect and avoid large obstacles such as walls, furniture, and people. Downward- and upward-facing sonars maintain a safe vertical distance from the floor and ceiling and provide precise ranging for the monocular optical odometry. This allows the user to zip down a hallway at “jogging speed” while maintaining collision-free flight. Unfortunately, sonar and vision cannot detect every obstacle with 100% accuracy 100% of the time, and collisions with challenging objects (such as twigs, windows, or fast-moving obstacles) will inevitably occur. Optionally installed propeller guards, in conjunction with InstantEye’s ultra-fast reflexive controller, reject these residual collisions.
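To illustrate the idea of range-based collision avoidance, the sketch below scales a commanded forward speed by the distance reported by a forward-looking range sensor. The function name, distance thresholds, and linear speed ramp are illustrative assumptions, not InstantEye’s actual control law.

```python
# Minimal sketch of range-based speed limiting for obstacle avoidance.
# The thresholds and the linear ramp are illustrative, not product values.

def limit_speed(commanded_speed_mps, forward_range_m,
                stop_dist_m=0.5, slow_dist_m=2.0):
    """Scale commanded forward speed by proximity to the nearest obstacle."""
    if forward_range_m <= stop_dist_m:
        return 0.0                      # too close: stop
    if forward_range_m >= slow_dist_m:
        return commanded_speed_mps      # clear: full commanded speed
    # Linear ramp between the stop and slow distances.
    scale = (forward_range_m - stop_dist_m) / (slow_dist_m - stop_dist_m)
    return commanded_speed_mps * scale
```

In a real system, each sonar (forward, upward, downward) would feed a constraint like this, and the reflexive controller handles whatever the sensors miss.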
The second challenge of indoor operation is localization without GPS. PSI has adapted the work of Ethan Eade and Tom Drummond on monocular-camera Simultaneous Localization and Mapping (SLAM). To augment their graph-SLAM approach, we use a downward-looking camera for hyper-robust optical odometry and the onboard gyros for attitude updates. Fusing these three sources of information yields an accurate estimate of both the local surroundings (mapping) and the vehicle’s own position (localization). The global map is rectified with a background bundle-adjustment algorithm that aligns the edges between multiple graph nodes. A background loop-closure detection algorithm uses an adaptive bag of words to recognize features and scenes that have been observed previously. Sonar and downward vision help the vehicle reject false positives in hyper-repetitive visual environments.
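The core of a bag-of-words loop-closure check can be sketched as comparing the current scene’s visual-word histogram against stored keyframes. The sketch below uses plain cosine similarity over word counts; the function names and the similarity threshold are assumptions for illustration, not PSI’s adaptive vocabulary implementation.

```python
import math
from collections import Counter

# Illustrative bag-of-words loop-closure check: each image is reduced to a
# histogram of quantized visual words, and the current scene is matched
# against previously stored keyframes by cosine similarity.

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two visual-word histograms."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def detect_loop_closure(current: Counter, keyframes, threshold=0.8):
    """Return the index of the best-matching keyframe, or None if no
    keyframe clears the similarity threshold."""
    best_idx, best_sim = None, threshold
    for i, kf in enumerate(keyframes):
        sim = cosine_similarity(current, kf)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

A candidate match found this way would still be gated by the sonar and downward-vision checks the text describes, which is what suppresses false positives in repetitive corridors.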