Tuesday, May 17, 2011

E Ink & Epson High Resolution ePaper

It's pretty obvious that this year's SID Display Week is shaping up to be a stage for the 300ppi extravaganza -- Samsung and LG were first to announce their latest high pixel density LCDs, and then Toshiba chimed in with its 367ppi LCD for cellphones. Fortunately, fans of ePaper will also have something to look forward to here, as E Ink Holdings and Epson have just announced the co-development of a 300-dpi ePaper device. To be exact, E Ink will be in charge of producing the sharp-looking 9.68-inch 2,400 x 1,650 display panel, whereas Epson will take care of the high-speed display controller platform to go with E Ink's display. No availability has been announced just yet, but stay tuned for our eyes-on impressions at the show.

Sunday, May 15, 2011

The Fourth Paradigm: Data-Intensive Scientific Discovery

Presenting the first broad look at the rapidly emerging field of data-intensive science

Increasingly, scientific breakthroughs will be powered by advanced computing capabilities that help researchers manipulate and explore massive datasets.

The speed at which any given scientific discipline advances will depend on how well its researchers collaborate with one another, and with technologists, in areas of eScience such as databases, workflow management, visualization, and cloud computing technologies.

In The Fourth Paradigm: Data-Intensive Scientific Discovery, a collection of essays expands on the vision of pioneering computer scientist Jim Gray for a new, fourth paradigm of discovery based on data-intensive science and offers insights into how it can be fully realized.

Critical Praise for The Fourth Paradigm

“The individual essays—and The Fourth Paradigm as a whole—give readers a glimpse of the horizon for 21st-century research and, at their best, a peek at what lies beyond. It’s a journey well worth taking.”

James P. Collins
School of Life Sciences, Arizona State University

Download the article (PDF)

Read the review online (subscription required)

From the Back Cover

“The impact of Jim Gray’s thinking is continuing to get people to think in a new way about how data and software are redefining what it means to do science."

Bill Gates, Chairman, Microsoft Corporation

“I often tell people working in eScience that they aren’t in this field because they are visionaries or super-intelligent—it’s because they care about science and they are alive now. It is about technology changing the world, and science taking advantage of it, to do more and do better.”

Rhys Francis, Australian eResearch Infrastructure Council

“One of the greatest challenges for 21st-century science is how we respond to this new era of data-intensive science. This is recognized as a new paradigm beyond experimental and theoretical research and computer simulations of natural phenomena—one that requires new tools, techniques, and ways of working.”

Douglas Kell, University of Manchester

“The contributing authors in this volume have done an extraordinary job of helping to refine an understanding of this new paradigm from a variety of disciplinary perspectives.”

Gordon Bell, Microsoft Research

Friday, May 13, 2011

Panoramic Images using Microsoft ICE


Panoramic View of Dublin using MS ICE


It’s very easy to use: just drag and drop a set of pictures, and ICE will create an exportable panoramic image for you…

Microsoft Image Composite Editor is an advanced panoramic image stitcher. Given a set of overlapping photographs of a scene shot from a single camera location, the application creates a high-resolution panorama that seamlessly combines the original images. The stitched panorama can be shared with friends and viewed in 3D by uploading it to the Photosynth web site. Or the panorama can be saved in a wide variety of image formats, from common formats like JPEG and TIFF to the multiresolution tiled format used by Silverlight's Deep Zoom and by the HD View and HD View SL panorama viewers.
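The "seamless" part of stitching comes down to blending: after the photos are aligned, the overlap between neighbouring shots is cross-faded so exposure differences don't leave a visible seam. Here is a minimal, illustrative sketch of that idea (linear "feather" blending of two grayscale images, here just lists of pixel rows) — it is not ICE's actual algorithm, and the function name and toy values are made up for the example:

```python
def feather_blend(left, right, overlap):
    """Blend two horizontally overlapping grayscale images (lists of rows).

    The last `overlap` columns of `left` show the same scene as the
    first `overlap` columns of `right`. A weight ramps linearly from
    1 (pure left) to 0 (pure right) across the overlap, hiding
    exposure differences at the seam.
    """
    blended = []
    for lrow, rrow in zip(left, right):
        seam = []
        for i in range(overlap):
            alpha = 1.0 - i / (overlap - 1)          # 1 -> 0 across overlap
            l = lrow[len(lrow) - overlap + i]
            r = rrow[i]
            seam.append(alpha * l + (1.0 - alpha) * r)
        blended.append(lrow[:-overlap] + seam + rrow[overlap:])
    return blended

# Two 4x6 toy "photos" whose 4-column overlap differs in exposure.
left  = [[100.0] * 6 for _ in range(4)]   # darker shot
right = [[120.0] * 6 for _ in range(4)]   # brighter shot
pano = feather_blend(left, right, overlap=4)
print(len(pano), len(pano[0]))            # 4 rows x 8 columns
```

Instead of a hard jump from 100 to 120 at the seam, each blended row ramps smoothly between the two exposures, which is essentially what "automatic exposure blending" in the feature list below refers to.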

New features through version 1.3.5

  • Accelerated stitching on multiple CPU cores
  • Ability to publish, view, and share panoramas on the Photosynth web site
  • Support for "structured panoramas" — panoramas consisting of hundreds of photos taken in a rectangular grid of rows and columns (usually by a robotic device like the GigaPan tripod heads)
  • No image size limitation — stitch gigapixel panoramas
  • Support for input images with 8 or 16 bits per component
  • Ability to read raw images using WIC codecs
  • Photoshop layer and large document support

Additional features

  • State-of-the-art stitching engine
  • Automatic exposure blending
  • Choice of planar, cylindrical, or spherical projection
  • Orientation tool for adjusting panorama rotation
  • Automatic cropping to maximum image area
  • Native support for 64-bit operating systems
  • Wide range of output formats, including JPEG, TIFF, BMP, PNG, HD Photo, and Silverlight Deep Zoom

Thursday, April 28, 2011

1d, 2d, 3d Now 4d Barcodes

To increase the capacity of two-dimensional barcodes, a third dimension, color, can be added. These 3d codes are already available, as noted in a previous post on Color C Code, and researchers are now looking at adding a fourth dimension: time. The image below shows what they may look like, and this paper provides further information: Unsynchronized 4D Barcodes (pdf).
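The core idea of using time as a dimension is that an animated sequence of 2D codes can carry far more data than a single frame, provided a camera that starts capturing mid-loop can still reassemble the message. A minimal, illustrative sketch of that (not the actual scheme from the paper — the chunking and header format here are invented for the example) is to stamp every frame with its index and the total count:

```python
def encode_frames(payload: bytes, chunk_size: int):
    """Split a payload into 'frames', each of which would be rendered
    as one 2D barcode in the animated sequence. Every frame embeds
    (frame_index, total_frames) so an unsynchronized reader can start
    capturing at any point and still reassemble the message."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    total = len(chunks)
    return [(idx, total, chunk) for idx, chunk in enumerate(chunks)]

def decode_frames(captured):
    """Reassemble the payload from frames captured in any order,
    tolerating duplicates (the sequence loops on screen)."""
    total = captured[0][1]
    slots = {idx: chunk for idx, _, chunk in captured}
    if len(slots) < total:
        raise ValueError("still missing frames; keep capturing")
    return b"".join(slots[i] for i in range(total))

frames = encode_frames(b"hello 4d barcodes", chunk_size=5)
# Simulate a camera that joins mid-loop: rotated order plus a duplicate.
captured = frames[2:] + frames[:2] + [frames[1]]
print(decode_frames(captured))   # b'hello 4d barcodes'
```

The per-frame header is what makes the stream "unsynchronized": the display and the camera never need to agree on when frame 0 appears.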

4d barcode in action

Wednesday, March 30, 2011

Carlitos’ Projects: Speech-Controlled Arduino Robot

We all dream of having appliances and machines that can obey our spoken commands. Well, let’s take the first step towards making this happen. In this second iteration of Carlitos’ Projects, we are going to build a speech-controlled Arduino-based robot.

Speech Controlled Arduino Robot

You may be thinking that making such a robot must be a very complex task. After all, humans take many years before they can understand speech properly. Well, it is not as difficult as you may think and it is definitely lots of fun. The video below illustrates how to make your own speech-controlled Arduino rover.

After watching the video, read below the detailed list of parts and steps required to complete the project.

Materials

  • A DFRobotShop Rover kit. This is the robot to be controlled.
  • A VRbot speech recognition module. It processes the speech and identifies the commands.
  • Two Xbee RF communication modules. They create a wireless link between the speech recognition engine and the robot.
  • An Arduino Uno. It controls the speech recognition module.
  • An IO expansion shield. It lets you connect the Xbee module to the DFRobotShop Rover.
  • An Xbee shield. It lets you connect an Xbee module to the Arduino Uno.
  • Male headers. They are required by the Xbee shield.
  • A barrel jack to 9V battery adaptor. It lets you power the Arduino Uno through a 9V battery.
  • An LED. It is not required since the IO expansion shield already has one, but it can provide more visible activity feedback.
  • An audio jack. It will be used to connect the microphone; this is optional.
  • A headset or a microphone (a microphone is included with the speech recognition module).

Tools

  • A wire cutter. It will be used to cut the leads off components.
  • A soldering iron. To solder all the (many) connections, a soldering station might be preferable since it provides steady, reliable temperature control that makes soldering easier and safer (there is less risk of burning the components if the temperature is set correctly).
  • A third hand. This is not absolutely required, but it is always useful for holding components and parts when soldering.
  • A hot-glue gun, to stick the components together.
  • A computer. It is used to program the DFRobotShop Rover and the Arduino Uno with the Arduino IDE.

Putting it Together

  1. Assemble the DFRobotShop Rover and mount the IO expansion shield, an Xbee module and the LED. See the picture above or the video for further information.
  2. Solder the headers onto the Xbee shield. Also solder four headers on the prototyping area as shown below. Don't like soldering? Keep reading: there is a no-solder-required version of the project.
    Speech Engine - 2
  3. Connect the four headers to the corresponding pins as shown below.
    Speech Engine - 3
  4. As shown above, you can also mount the headphone jack and use the cable included with the microphone in order to connect it to the VRbot module's microphone input.
  5. Put the shield onto the Arduino and connect the battery.
    Speech Engine - 4
  6. Connect the VRbot speech recognition module wires and the microphone.
    Speech Engine - Back
  7. Program the DFRobotShop Rover and the Arduino Uno with these programs respectively:
    dfrobotshop_serial.zip and VRbot.zip
  8. Start talking to your robot! Say “forward”, “backward”, “left”, or “right” to make the robot move in the desired direction. The word “move” shown in the video has been removed from the program in order to improve performance.
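The control logic in step 8 boils down to a small lookup: the speech module reports which trained word it heard, and the sketch turns that into a pair of motor directions sent over the Xbee link. The snippet below is an illustrative sketch of that mapping only (in Python rather than Arduino code); the command names match the project, but the motor values and function name are invented for the example and are not taken from dfrobotshop_serial.zip or VRbot.zip:

```python
# (left motor, right motor): +1 = ahead, -1 = reverse, 0 = stopped.
COMMANDS = {
    "forward":  (+1, +1),   # both wheels ahead
    "backward": (-1, -1),   # both wheels in reverse
    "left":     (-1, +1),   # spin in place counter-clockwise
    "right":    (+1, -1),   # spin in place clockwise
}

def handle_word(word):
    """Map a word recognized by the speech module to motor directions.
    Unrecognized words stop the rover rather than guessing."""
    return COMMANDS.get(word, (0, 0))

print(handle_word("forward"))   # (1, 1)
print(handle_word("banana"))    # (0, 0)
```

Defaulting unknown words to a stop is the safe choice for a speech interface, since recognition errors are common in a noisy room.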

More at ... http://www.robotshop.com/gorobotics/