Making food with 3D printers is not a new concept, but it is still largely in the realm of science fiction. NASA wants to turn science fiction into reality sooner rather than later, however, and it's throwing plenty of money toward those at the cutting edge of the technology.
Quartz reports that NASA has awarded Systems & Materials Research Corporation a $125,000 grant to continue work on what company head Anjan Contractor calls a universal food synthesizer. As currently envisioned, the technology would use cartridges of powders and oils to create complex foods one layer at a time.
NASA is understandably interested in the technology, as it would provide plenty of inexpensive food to space travelers. The current goal is to have the food cartridges last up to 30 years, which would ensure that long-distance missions to Mars and beyond wouldn't suffer from food spoilage.
Of course, space travel isn't the only thing this particular 3D printer would make easier. Feeding the world's population would be a cinch if everybody owned a 3D printer and a supply of inexpensive food cartridges that dole out only what a person needs, so no food is wasted. It seems impossible with our current food production methods, but Contractor's plans could very well end world hunger.
The first step in space travel and ending world hunger may just lie in the humble pizza. America's favorite food seems perfectly suited to the 3D printing process, since one layer of food is added at a time. In the case of pizza, the dough would be extruded onto a heated plate that bakes it as it's being printed. Next, tomato powder would be mixed with water and oil and deposited as the sauce. Finally, a "protein layer" derived from plants or animals would be added on top.
A 3D pizza printer may sound like a revolutionary new concept, but NASA has been experimenting with 3D printers for quite some time. The agency is even looking into whether it could deploy 3D printers to the surface of the moon to build structures out of lunar soil.
Former Academic Advisor for the USF College of Engineering
2002 to 2006
Worked with undergraduate engineering students, primarily during their first two years, before they declared an engineering discipline.
Kate Johnson, Director of Academic Advising
Former student recommendation
Created the Annual Engineering Open House
Assisted in development of the College of Engineering blackboard
Assisted in the development of student chapters of AIAA, Theta Tau, and the Robotics Club. Participated in student engineering groups including ASHRAE, NSBE, SHPE, Engineering Expo, and FGLSAMP.
Goal: Raise scholarship money for more African-American girls to study engineering at USF.
The best slow-motion shots money can buy are usually captured at between 5,000 and 10,000 frames per second: we're talking seriously slow, like bullets shattering glass in beautiful, explosive detail.
Researchers at the University of California, Los Angeles (UCLA) have developed something much faster: a camera capable of recording 36.7 million frames per second. Of course, the high-throughput imaging flow analyzer, as it’s called, won’t be used to take awesome slow-motion shots.
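To put those frame rates in perspective, here is a rough back-of-the-envelope sketch using only the figures quoted above (the 10,000 fps baseline is the upper end of the slow-motion range mentioned earlier):

```python
# Frame rates quoted in the article.
HIGH_END_SLOWMO_FPS = 10_000       # upper end of typical slow-motion rigs
UCLA_FPS = 36_700_000              # UCLA high-throughput imaging flow analyzer

def frame_interval_ns(fps):
    """Time between consecutive frames, in nanoseconds."""
    return 1e9 / fps

speedup = UCLA_FPS / HIGH_END_SLOWMO_FPS   # roughly 3,670 times faster
interval = frame_interval_ns(UCLA_FPS)     # about 27 nanoseconds per frame
```

In other words, the UCLA device grabs a new frame every 27 nanoseconds or so, several thousand times faster than the best conventional slow-motion rigs.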
Instead, it’s meant to pick out rare cancer cells in blood samples. According to UCLA, it can screen 100,000 cells per second, which is 100 times faster than other blood analyzers. Why is this important? Because a handful of cancer cells hiding in a billion healthy cells can eventually metastasize into full-blown, fatal cancer.
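Those throughput numbers translate into concrete screening times. A minimal sketch, assuming a one-billion-cell sample as in the article's own framing and taking the quoted 100x factor at face value:

```python
CELLS_PER_SAMPLE = 1_000_000_000   # "a billion healthy cells"
UCLA_RATE = 100_000                # cells per second, per UCLA
OLD_RATE = UCLA_RATE // 100        # other analyzers, quoted as 100x slower

ucla_hours = CELLS_PER_SAMPLE / UCLA_RATE / 3600   # under 3 hours
old_hours = CELLS_PER_SAMPLE / OLD_RATE / 3600     # well over a week
```

At 100,000 cells per second a billion-cell screen takes under three hours; at one hundredth that rate it would take more than eleven days.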
Today, labs use microscopes equipped with digital cameras to screen blood samples for signs of cancer. The problem with traditional CCD and CMOS cameras is that they just aren't fast or sensitive enough to do it efficiently; the higher the speed, the less sensitive to light they become, making for poor-quality pictures.
What makes the high-throughput imaging flow analyzer different? Basically, it forces particles through a narrow channel, where laser pulses bounce off them and are recorded by something called an optoelectronic time-stretch image processor. The result is real-time analysis of fluid flowing through the device at a speedy four meters per second with no motion blur.
Not only is it fast, it’s also accurate, with a false-positive rate of one cell in a million. The researchers say it could also be useful in other areas of science, such as scanning vast amounts of seawater for phytoplankton.
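That false-positive rate matters precisely because true positives are so rare. As an illustration, suppose (hypothetically; the article only says "a handful") that 10 genuine cancer cells hide in a billion-cell sample:

```python
CELLS = 1_000_000_000
FALSE_POSITIVE_RATE = 1e-6     # one cell in a million, per the article
TRUE_CANCER_CELLS = 10         # assumed "handful"; not from the article

expected_false_positives = CELLS * FALSE_POSITIVE_RATE   # about 1,000 cells
flagged = expected_false_positives + TRUE_CANCER_CELLS
precision = TRUE_CANCER_CELLS / flagged   # under 1% of flags are real cancer
```

Even at one in a million, a billion-cell screen still flags around a thousand healthy cells, so downstream confirmation would remain necessary; the 10-cell figure here is purely illustrative.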
Still, its main purpose, according to the report's lead author Keisuke Goda, is to "reduce errors and costs in medical diagnosis." Not as fun as watching bullets fire in slow motion, but a lot more useful to humanity.
Supercameras Could Capture Never-Before-Seen Detail
A supercamera that can take gigapixel pictures — that’s 1,000 megapixels — has now been unveiled.
Researchers say these supercameras could have military, commercial and civilian applications, and that handheld gigapixel cameras may one day be possible.
The gigapixel camera uses 98 identical microcameras in unison, each armed with its own set of optics and a 14-megapixel sensor. These microcameras, in turn, all peer through a single large spherical lens to collectively see the scene the system aims to capture. Since the optics of the microcameras are small, they are relatively easy and cheap to fabricate.
A specially designed electronic processing unit stitches together all the partial images each microcamera takes into a giant, one-gigapixel image. In comparison, film can have a resolution of about 25 to 800 megapixels, depending on the kind of film used.
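Note that the raw sensor count doesn't map one-to-one onto the output image. A rough sketch of the pixel budget follows; the inference that the leftover pixels are consumed by stitching overlap between neighboring microcameras is an assumption, not something the article states:

```python
MICROCAMERAS = 98
PIXELS_PER_SENSOR = 14_000_000    # 14-megapixel sensor per microcamera
OUTPUT_PIXELS = 1_000_000_000     # one gigapixel after stitching

raw_pixels = MICROCAMERAS * PIXELS_PER_SENSOR        # 1.372 billion
leftover_fraction = 1 - OUTPUT_PIXELS / raw_pixels   # roughly 27%
```

The 98 sensors collect about 1.37 billion pixels per exposure, so roughly a quarter of the raw data does not survive into the final one-gigapixel frame.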
“In the near-term, gigapixel cameras will be used for wide-area security, large-scale event capture — for example, sport events and concerts — and wide-area multiple-user scene surveillance — for example, wildlife refuges, natural wonders, tourist attractions,” David Brady, an imaging researcher at Duke University in Durham, N.C., told InnovationNewsDaily. “As an example, a gigapixel camera mounted over the Grand Canyon or Times Square will enable arbitrarily large numbers of users to simultaneously log on and explore the scene via telepresence with much greater resolution than they could if they were physically present.”
Gigapixel cameras may have scientific value. For instance, a gigapixel snapshot of the Pocosin Lakes National Wildlife Refuge allowed details such as the number of tundra swans on the lake or in the distant sky at that precise moment to be seen, allowing researchers to track individual birds and analyze behavior across the flock. Very wide-field surveillance of the sky is possible as well, enabling analysis of events such as meteor showers.
“I believe that the need to store, manage and mine these data streams will be the definitive application of supercomputers,” Brady said.
The gigapixel device currently delivers one-gigapixel images at a speed of about three frames per minute. It actually captures images in less than a tenth of a second — it just takes 18 seconds to transfer the full image from the microcamera array to the camera’s memory.
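Those numbers are consistent: the cadence is readout-limited, not exposure-limited. A quick sketch, where the one-byte-per-pixel figure is an assumption based on the camera's monochrome output rather than a published spec:

```python
EXPOSURE_S = 0.1      # capture takes under a tenth of a second
TRANSFER_S = 18       # readout from the microcamera array to memory

cycle_s = EXPOSURE_S + TRANSFER_S
frames_per_minute = 60 / cycle_s          # about 3, matching the article

# Implied transfer bandwidth, assuming one byte per (monochrome) pixel.
mb_per_second = 1_000_000_000 / TRANSFER_S / 1_000_000   # roughly 56 MB/s
```

Shaving down that 18-second readout, rather than the exposure itself, is what would raise the frame rate.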
The camera also currently only takes black-and-white images, since color pictures are more difficult to analyze. “Next-generation systems will be color cameras,” Brady said.
In addition, the camera is quite large, measuring 29.5 by 29.5 by 19.6 inches (75 by 75 by 50 centimeters), a size required by the space currently needed to cool its electronics and keep them from overheating. The researchers hope that as more efficient and compact electronics get developed, handheld gigapixel cameras might one day emerge, similar in size to current handheld single-lens reflex (SLR) cameras.
“Of course, it is not possible for a person to hold a camera steady enough to capture the full resolution of a gigapixel camera, so it may be desirable to mount the camera on a tripod,” Brady said. “On the other hand, motion compensation strategies may overcome this challenge.”
The researchers are also working on more powerful cameras. They have currently built a two-gigapixel prototype camera that possesses 226 microcameras, and are in the manufacturing phase for a 10-gigapixel system. Ten- to 100-gigapixel cameras “will remain more backpack-size rather than handheld,” Brady said.
The scientists detailed their findings in the June 21 issue of the journal Nature.
Copyright 2012 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.