Over the last year, we’ve described many opportunities at the intersection of computing and healthcare, as well as computing and sustainability — and there are a couple of great examples in the press this week.
Researchers at the University of Washington have engineered Raven II, a new surgical robot with wing-like arms that is built on an open-source platform and can perform surgery on simulated patients — with the aims of speeding up procedures, reducing errors, and improving patient outcomes:
The latest version of the Raven has mechanical wrists that hold tiny pincers. Coming soon is a piece that will allow research groups to attach the same tools used by commercial surgical robots.
The robots were developed by [University of Washington] electrical engineering professor Blake Hannaford and Jacob Rosen, a former UW faculty member who is now an associate professor of computer engineering at UC Santa Cruz.
Until now, most research on surgical robotics in the United States has meant creating new software for commercial robots.
“Academic researchers have had limited access to these proprietary systems,” Rosen said. “We are changing that by providing high-quality hardware developed within academia. Each lab will start with an identical, fully operational system, but they can change the hardware and software and share new developments and algorithms, while retaining intellectual property rights for their own innovations.”
[Raven II] … has more compact electronics and dexterous hands that can hold wristed surgical tools, like the newest commercial machines. A surgeon sitting at a screen can look through Raven’s cameras and guide the instruments to perform a task such as suturing. The system, while not approved by the Food and Drug Administration, is precise enough to support research on advanced robotic-surgery techniques…
The UW group is making its software work with the Robot Operating System (ROS), a popular open-source robotics framework, so groups can easily connect the Raven to other devices…
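Because the Raven speaks ROS, connecting it to other tools can be as simple as subscribing to a topic. Here is a minimal sketch of a listener node in Python using rospy; the topic name /raven/joint_states and its message type are illustrative assumptions, not the actual Raven II interface:

```python
# Minimal ROS listener sketch (rospy). The topic name
# "/raven/joint_states" is hypothetical -- the real Raven II
# interface may expose different topics and message types.
import rospy
from sensor_msgs.msg import JointState

def on_joints(msg):
    # Print each joint's current position as updates stream in.
    for name, pos in zip(msg.name, msg.position):
        rospy.loginfo("%s: %.4f rad", name, pos)

if __name__ == "__main__":
    rospy.init_node("raven_listener")
    rospy.Subscriber("/raven/joint_states", JointState, on_joints)
    rospy.spin()  # hand control to ROS until shutdown
```

Any lab’s code that publishes or subscribes to shared topics like this could, in principle, be dropped onto another lab’s identical Raven, which is exactly the point of a common platform.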
The hope is that the common, open-source platform will allow research groups to share software, replicate experiments and collaborate. Participating schools’ specialties include:
- Harvard mechanical engineers working on “beating-heart” surgery, where a robot compensates for the movement of a beating heart so a surgeon can operate as if on a static surface.
- Johns Hopkins computer scientists working on image analysis, superimposing the surgeon’s field of view on standard medical images.
- UW research on force feedback, using machine intelligence to create barriers around things a surgeon needs to avoid, and attractive force fields around objects the surgeon wants to touch (a rough sketch of this idea appears below the list).
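That last item is essentially an artificial potential field: repulsive “walls” around forbidden regions and attractive “wells” around targets, rendered as forces on the surgeon’s hand controller. A minimal sketch of the computation follows; the spherical geometry, gains, and radius are illustrative assumptions, not the UW group’s actual implementation:

```python
import numpy as np

# Illustrative parameters -- not the UW group's actual values.
K_ATTRACT = 5.0      # pull toward the target (N/m)
K_REPEL = 50.0       # stiffness of the virtual barrier (N/m)
SAFE_RADIUS = 0.02   # forbidden zone around the obstacle (m)

def feedback_force(tool, obstacle, target):
    """Force (3-vector, N) to render on the surgeon's controller.

    Gently attracts the tool tip toward `target` and pushes it back
    whenever it enters the spherical zone around `obstacle`.
    All positions are 3-vectors in meters (numpy arrays).
    """
    force = K_ATTRACT * (target - tool)   # attractive "well"
    offset = tool - obstacle
    dist = np.linalg.norm(offset)
    if 0 < dist < SAFE_RADIUS:
        # Spring-like wall pushing the tool out of the forbidden zone.
        force += K_REPEL * (SAFE_RADIUS - dist) * (offset / dist)
    return force
```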
Check out a neat video of Raven at work:
And to learn more, see the UW press release in its entirety.
A wristband that plugs you into smart buildings
Meanwhile, New Scientist describes WristQue, an MIT project that “aims to create a low-power wristband device that works with sensors embedded in buildings to monitor how you feel and continually adjust the lighting and temperature to keep you happy”:
Sweltering in the office while your colleague shivers under layers of extra clothing? Just register your discomfort by tapping a button on your wrist and let the room do the rest.
That’s one of the ideas behind WristQue… the key to controlling “the immersive world of interactive media that will one day surround us”, says Joe Paradiso, director of the Responsive Environments Group at MIT’s Media Lab, who is working with colleagues to design it.
Each 3D-printed, plastic WristQue band will contain a microprocessor and will be packed with environmental sensors to detect changes in temperature, humidity and light. It will be fitted with a chip that uses ultra-wideband radio signals to pinpoint the user’s location and will be able to communicate wirelessly with sensors fitted in smart buildings.
WristQue is designed to be simple and unobtrusive, says Paradiso. It will only have three buttons – two of which will allow users to indicate they are either too warm or too cold. The third will activate gestural controls, so users can interact with any devices nearby, such as televisions or computers. “People can gesture with Kinect but it doesn’t know who you are – we’re thinking of a device that can do that, but without distracting you like a PDA,” says Paradiso.
So far, the team has developed and tested the climate control parts of the device in the Media Lab building. This is fitted with motion sensors, which detect whether the room is occupied. If someone is present, but hasn’t specified whether they would like the temperature to change, the system sets the temperature to a default level. When users do press the hot or cold buttons, the temperature is changed to suit the majority of people in the room. This can be achieved by opening and closing windows, or activating the air conditioning. Environmental sensors outside the building let the system predict the likely temperature change inside a room if the windows were to be opened.
Using the room’s motion sensor data from the previous week, the system’s software also predicts when the room will next be occupied, and by whom. This is used to bring the room up to a pleasant temperature before people arrive. A three-week trial saw a 24 per cent reduction in energy usage because less air-conditioning was needed to keep all occupants comfortable…
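Stripped to its essentials, the control logic described above is easy to state: an occupied room with no votes gets a default setpoint; otherwise the setpoint moves toward the majority preference, and last week’s motion data stand in for a schedule. Here is a toy sketch of both pieces in Python; the setpoint values, vote step, slot size, and the weekly-repetition rule are all assumptions for illustration, and the Media Lab system, with its window actuation and outdoor sensors, is considerably more sophisticated:

```python
from datetime import timedelta

DEFAULT_SETPOINT_C = 22.0  # assumed default comfort level
STEP_C = 0.5               # assumed nudge per majority vote

def next_setpoint(current_c, votes, occupied):
    """Pick a temperature setpoint from occupants' button presses.

    votes: +1 per "too cold" press, -1 per "too warm" press.
    """
    if not occupied:
        return current_c           # empty room: change nothing
    if not votes:
        return DEFAULT_SETPOINT_C  # occupied, but nobody has voted
    tally = sum(votes)
    if tally == 0:
        return current_c           # tie: leave the setpoint alone
    return current_c + STEP_C * (1 if tally > 0 else -1)

def predicted_occupied(ts, motion_log, slot_minutes=15):
    """Guess whether the room will be occupied at datetime `ts`.

    Naive weekly-repetition rule: predict occupied if the same time
    slot saw motion exactly one week earlier. `motion_log` is a set
    of slot-start datetimes at which the motion sensor fired.
    """
    t = ts - timedelta(weeks=1)
    slot = t.replace(minute=(t.minute // slot_minutes) * slot_minutes,
                     second=0, microsecond=0)
    return slot in motion_log
```

Pre-conditioning then amounts to checking `predicted_occupied` a little ahead of each time slot and driving the room toward the default setpoint before anyone arrives.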
For more, read the full New Scientist story.
Learn more about the many research opportunities we’ve described in these areas here. And if you have a cool research result to report, please share it with us here — and we’ll feature it as a Computing Research Highlight of the Week.
(Contributed by Erwin Gianchandani, CCC Director)