Monday, June 6, 2011

The Windows 8 demo

The video below was released on Wednesday evening to coincide with Windows President Steven Sinofsky offering the first public demo of Windows 8 at the All Things Digital conference (a.k.a. D9). In this video, Jensen Harris, director of program management for the Windows User Experience, provides a quick walk-through and promises that more video demos will be coming soon.


Friday, May 27, 2011

Google Wallet Product Launch


Payments, offers, loyalty, and so much more

Google Wallet has been designed for an open commerce ecosystem. It will eventually hold many if not all of the cards you keep in your leather wallet today. And because Google Wallet is a mobile app, it will be able to do more than a regular wallet ever could, like storing thousands of payment cards and Google Offers but without the bulk. Eventually your loyalty cards, gift cards, receipts, boarding passes, tickets, even your keys will be seamlessly synced to your Google Wallet. And every offer and loyalty point will be redeemed automatically with a single tap via NFC.

Friday, May 20, 2011

Sony Flexible Color e-Paper & 3D LCDs

We saw some fancy panels and flashy lights on the show floor at SID this week, but Sony decided to keep its latest display offerings tucked away in an academic meeting. We're getting word today from Tech-On! that the outfit unveiled a 13.3-inch sheet of flexible color e-paper as well as two new glasses-free 3D panels in a separate session at the conference. New e-paper solutions loomed large at SID, but we were surprised by the lack of flexible screens. Sony's managed to deliver both in a display that weighs only 20 grams and measures a mere 150 microns thick, a feat made possible by the use of a plastic substrate. The sheet boasts a 13-percent color gamut, a 10:1 contrast ratio, and 150dpi resolution.

As for the 3D LCDs, Sony joined a slew of other manufacturers in showing off its own take on the technology. These new displays, ranging from 10 to 23 inches, apparently employ a new method for delivering 3D to the naked eye: a dedicated 3D backlight positioned between the LCD panel and the regular backlight used for 2D images, which can easily be switched off for 2D viewing. Of course we would have liked to see these screens in the flesh, but alas, Sony decided to play coy. Hop on past the break for a shot of the new 3D panel.

Thursday, May 19, 2011

Are you ready for Super Hi-Vision after HDTV?


Sharp and NHK are showing off the world’s first Super Hi-Vision display, pointing the way to a future where high definition TV will be many times sharper than the HDTV we’re familiar with today.

This 85-inch prototype screen was jointly developed by Sharp Corporation and Japan Broadcasting Corporation (NHK), finally creating a monitor that can display the jaw-dropping ultra-high definition of the Super Hi-Vision format NHK has been working on since 1995.

How high is this Super Hi-Vision’s definition? To give you an idea, today’s HDTV resolution lets you recognize faces in a crowd, whereas Super Hi-Vision will let you determine whether the pupils in the eyes of one of those faces are dilated. I’ve seen a screen with just half this resolution, and even that is astonishing.

By the numbers, according to Sharp, the TV’s resolution is 16 times that of conventional HDTV: a 33-megapixel screen made up of 7,680 x 4,320 pixels. Compare that with the relatively measly 1,920 x 1,080 pixels of the HDTV we are all so fond of, and you’ll agree that we’re in for a treat.
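The arithmetic is easy to verify; a quick check in Python (my own sketch, not from the article):

```python
# Super Hi-Vision vs. conventional HDTV pixel counts
SHV = (7680, 4320)
HDTV = (1920, 1080)

shv_pixels = SHV[0] * SHV[1]      # 33,177,600, i.e. ~33 megapixels
hdtv_pixels = HDTV[0] * HDTV[1]   # 2,073,600

ratio = shv_pixels / hdtv_pixels  # exactly 16: 4x in each dimension
print(shv_pixels, hdtv_pixels, ratio)
```

The "16 times" figure falls straight out of doubling the linear resolution twice: 4x as many pixels horizontally times 4x vertically.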

A Tweeting .NET Micro Framework Breathalyzer

This project shows how you can use the Netduino Plus to make a tweeting breathalyzer—a standalone breathalyzer that can post messages about the detected alcohol level to Twitter, using an inexpensive alcohol gas sensor.

The Netduino is an open source electronics platform based on a 32-bit microcontroller running the .NET Micro Framework. The Netduino Plus is similar to the original Netduino, but adds a built-in Ethernet controller and MicroSD slot. Since the Netduino Plus can connect directly to a network, it can independently communicate with Twitter’s API without being connected to a computer.

Hardware Overview


The MakerShield is a simple prototyping shield that is compatible with the standard Arduino and Netduino boards.

In this configuration, the MQ-3 alcohol gas sensor will output an analog voltage between 0 and 3.3V to indicate the amount of alcohol detected. This output will be connected to one of the Netduino’s analog input pins and read by its ADC.

While it would be possible to convert the sensor’s output to a numeric BAC level, this would require careful calibration and would be prone to error. For this project, I will use approximate value ranges to determine which of several messages should be posted to Twitter. An approximate reading will be displayed on an RGB LED.
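The range-based approach can be sketched as follows. This is plain Python rather than the project's Netduino C#, and the voltage thresholds and messages are illustrative placeholders, not calibrated values:

```python
# Illustrative sketch: map a raw 10-bit ADC reading from the MQ-3
# to a voltage, then to one of several approximate ranges.
# Thresholds and messages are made up, not calibrated BAC values.
MAX_ADC = 1023   # 10-bit ADC full-scale reading
VREF = 3.3       # analog reference voltage

# (upper voltage bound, message) pairs, checked in order
RANGES = [
    (1.0, "All clear"),
    (2.0, "Getting tipsy"),
    (VREF, "Do not drive!"),
]

def message_for(raw_reading):
    """Return the message for a raw ADC reading from the sensor."""
    volts = raw_reading / MAX_ADC * VREF
    for upper, msg in RANGES:
        if volts <= upper:
            return msg
    return RANGES[-1][1]

print(message_for(100))  # a low reading falls in the first range
```

The same table can drive both the tweet text and the LED color, so the calibration lives in one place.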

RGB LED

The RGB LED is the primary status indicator. During normal operation, it shows the level of alcohol, represented by colors ranging from green to red.

Three transistors are used to provide power to the RGB LED. The microcontroller used on the Netduino has a relatively low current limit per IO pin (around 8 mA for most pins) so it is generally not advised to drive LEDs (which can require 20-30 mA) directly from these pins. Using a transistor (or another LED driver) helps ensure that enough power will be made available to each LED without damaging the Netduino.
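A rough current budget shows why. Assuming a typical red LED forward voltage of about 2V (a datasheet figure, not measured from this build), the numbers work out as follows:

```python
# Rough current math for driving an LED through a transistor switch.
# Forward voltage is a typical datasheet value (assumed, not measured).
V_SUPPLY = 3.3      # volts
V_F_RED = 2.0       # typical red LED forward voltage
I_TARGET = 0.020    # 20 mA target LED current
PIN_LIMIT = 0.008   # ~8 mA per microcontroller IO pin

# Series resistor needed for roughly 20 mA through the red LED
r_series = (V_SUPPLY - V_F_RED) / I_TARGET
print(round(r_series))  # about 65 ohms

# Driving the LED straight from the pin would need 20 mA,
# well over the ~8 mA pin limit, hence the transistor.
print(I_TARGET > PIN_LIMIT)
```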

This page shows some common transistor circuits, including a few "transistor as a switch" circuits. Since the RGB LED I am using has a common cathode (low side) lead, I am using PNP transistors to switch the anode (high side) of each color.


Tuesday, May 17, 2011

E Ink & Epson High Resolution ePaper

It's pretty obvious that this year's SID Display Week is shaping up to be a stage for the 300ppi extravaganza -- Samsung and LG were first to announce their latest high pixel density LCDs, and then Toshiba chimed in with its 367ppi LCD for cellphones. Fortunately, fans of ePaper will also have something to look forward to here, as E Ink Holdings and Epson have just announced the co-development of a 300-dpi ePaper device. To be exact, E Ink will be in charge of producing the sharp-looking 9.68-inch 2,400 x 1,650 display panel, whereas Epson will take care of the high-speed display controller platform to go with E Ink's display. No availability has been announced just yet, but stay tuned for our eyes-on impression at the show.
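The announced density is consistent with the panel geometry; a quick check (assuming the 9.68 inches refers to the diagonal):

```python
import math

# E Ink / Epson panel: 9.68-inch diagonal (assumed), 2400 x 1650 pixels.
# Pixel density = pixels along the diagonal / diagonal length in inches.
diag_pixels = math.hypot(2400, 1650)
ppi = diag_pixels / 9.68
print(round(ppi))  # about 301, matching the announced ~300 dpi
```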

Sunday, May 15, 2011

The Fourth Paradigm: Data-Intensive Scientific Discovery

Presenting the first broad look at the rapidly emerging field of data-intensive science

Increasingly, scientific breakthroughs will be powered by advanced computing capabilities that help researchers manipulate and explore massive datasets.

The speed at which any given scientific discipline advances will depend on how well its researchers collaborate with one another, and with technologists, in areas of eScience such as databases, workflow management, visualization, and cloud computing technologies.

The Fourth Paradigm: Data-Intensive Scientific Discovery is a collection of essays that expands on the vision of pioneering computer scientist Jim Gray for a new, fourth paradigm of discovery based on data-intensive science, and offers insights into how it can be fully realized.

Critical Praise for The Fourth Paradigm

“The individual essays—and The Fourth Paradigm as a whole—give readers a glimpse of the horizon for 21st-century research and, at their best, a peek at what lies beyond. It’s a journey well worth taking.”

James P. Collins
School of Life Sciences, Arizona State University

Download the article (PDF)

Read the review online (subscription required)

From the Back Cover

“The impact of Jim Gray’s thinking is continuing to get people to think in a new way about how data and software are redefining what it means to do science."

Bill Gates, Chairman, Microsoft Corporation

“I often tell people working in eScience that they aren’t in this field because they are visionaries or super-intelligent—it’s because they care about science and they are alive now. It is about technology changing the world, and science taking advantage of it, to do more and do better.”

Rhys Francis, Australian eResearch Infrastructure Council

“One of the greatest challenges for 21st-century science is how we respond to this new era of data-intensive science. This is recognized as a new paradigm beyond experimental and theoretical research and computer simulations of natural phenomena—one that requires new tools, techniques, and ways of working.”

Douglas Kell, University of Manchester

“The contributing authors in this volume have done an extraordinary job of helping to refine an understanding of this new paradigm from a variety of disciplinary perspectives.”

Gordon Bell, Microsoft Research

Friday, May 13, 2011

Panoramic Images using Microsoft ICE


Panoramic View of Dublin using MS ICE


It’s very easy to use: just drag and drop a set of pictures, and ICE will create an exportable panoramic image for you.

Microsoft Image Composite Editor is an advanced panoramic image stitcher. Given a set of overlapping photographs of a scene shot from a single camera location, the application creates a high-resolution panorama that seamlessly combines the original images. The stitched panorama can be shared with friends and viewed in 3D by uploading it to the Photosynth web site. Or the panorama can be saved in a wide variety of image formats, from common formats like JPEG and TIFF to the multiresolution tiled format used by Silverlight's Deep Zoom and by the HD View and HD View SL panorama viewers.

New features through version 1.3.5

  • Accelerated stitching on multiple CPU cores
  • Ability to publish, view, and share panoramas on the Photosynth web site
  • Support for "structured panoramas" — panoramas consisting of hundreds of photos taken in a rectangular grid of rows and columns (usually by a robotic device like the GigaPan tripod heads)
  • No image size limitation — stitch gigapixel panoramas
  • Support for input images with 8 or 16 bits per component
  • Ability to read raw images using WIC codecs
  • Photoshop layer and large document support

Additional features

  • State-of-the-art stitching engine
  • Automatic exposure blending
  • Choice of planar, cylindrical, or spherical projection
  • Orientation tool for adjusting panorama rotation
  • Automatic cropping to maximum image area
  • Native support for 64-bit operating systems
  • Wide range of output formats, including JPEG, TIFF, BMP, PNG, HD Photo, and Silverlight Deep Zoom

Thursday, April 28, 2011

1D, 2D, 3D, and Now 4D Barcodes

To increase the capacity of two-dimensional barcodes, a third dimension, color, can be added. These 3D codes are already available, as noted in a previous post, Color C Code, and now researchers are looking at adding a fourth dimension: time. The image below shows what they may look like, and this paper provides further information: Unsynchronized 4D Barcodes (PDF).

4D barcode in action
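As a rough illustration of why each added dimension helps, here is a back-of-the-envelope capacity estimate. The numbers are made up for illustration (not taken from the paper), and real codes would spend some of this raw capacity on error correction and synchronization:

```python
import math

# Back-of-the-envelope raw capacity per code (illustrative numbers):
modules = 29 * 29     # 2D: a grid of black/white modules
colors = 8            # 3D: 8 colors instead of 2 levels per module
frames = 30           # 4D: 30 frames displayed over time

bits_2d = modules                      # 1 bit per black/white module
bits_3d = modules * math.log2(colors)  # 3 bits per 8-color module
bits_4d = bits_3d * frames             # repeated over time

print(bits_2d, bits_3d, bits_4d)
```

Each dimension multiplies the raw bit count: color multiplies by log2 of the palette size, and time multiplies by the number of frames.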

Wednesday, March 30, 2011

Carlitos’ Projects: Speech-Controlled Arduino Robot

We all dream of having appliances and machines that can obey our spoken commands. Well, let’s take the first step towards making this happen. In this second iteration of Carlitos’ Projects, we are going to build a speech-controlled Arduino-based robot.

Speech Controlled Arduino Robot

You may be thinking that making such a robot must be a very complex task. After all, humans take many years before they can understand speech properly. Well, it is not as difficult as you may think and it is definitely lots of fun. The video below illustrates how to make your own speech-controlled Arduino rover.

After watching the video, read below the detailed list of parts and steps required to complete the project.

Materials

  • A DFRobotShop Rover kit. This is the robot to be controlled.
  • A VRbot speech recognition module. It processes the speech and identifies the commands.
  • Two Xbee RF communication modules. They create a wireless link between the speech recognition engine and the robot.
  • An Arduino Uno. It controls the speech recognition module.
  • An IO expansion shield. It allows the Xbee module to be connected to the DFRobotShop Rover.
  • An Xbee shield. It allows an Xbee module to be connected to the Arduino Uno.
  • Male headers. They are required by the Xbee shield.
  • A barrel jack to 9V battery adaptor. It allows the Arduino Uno to be powered from a 9V battery.
  • An LED. It is not strictly required, since the IO expansion shield already has one, but it can provide more visible activity feedback.
  • An audio jack. It will be used to connect the microphone. This is optional.
  • A headset or a microphone (a microphone is included with the speech recognition module).

Tools

  • A wire cutter. It will be used to cut the leads off components.
  • A soldering iron. To solder all the (many) connections, a soldering station might be preferable, since it provides steady and reliable temperature control that allows for easier and safer soldering (there is less risk of burning the components if the temperature is set correctly).
  • A third hand. This is not absolutely required, but it is always useful for holding components and parts when soldering.
  • A hot-glue gun, to stick the components together.
  • A computer. It is used to program the DFRobotShop Rover and the Arduino Uno with the Arduino IDE.

Putting it Together

  1. Assemble the DFRobotShop Rover and mount the IO expansion shield, an Xbee module, and the LED. See the picture above or the video for further information.
  2. Solder the headers onto the Xbee shield. Also solder four headers on the prototyping area as shown below. Don't like soldering? Keep reading: there is a no-solder-required version of the project.
    Speech Engine - 2
  3. Connect the four headers to the corresponding pins as shown below.
    Speech Engine - 3
  4. As shown above, you can also mount the headphone jack and use the cable included with the microphone in order to connect it to the VRbot module microphone input.
  5. Put the shield onto the Arduino and connect the battery.
    Speech Engine - 4
  6. Connect the VRbot speech recognition module wires and the microphone.
    Speech Engine - Back
  7. Program the DFRobotShop Rover and the Arduino Uno with these programs respectively:
    dfrobotshop_serial.zip and VRbot.zip
  8. Start talking to your robot! Say “forward”, “backward”, “left”, or “right” to make the robot move in the desired direction. The word “move” shown in the video has been removed from the program to improve performance.
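At its core, the control flow in the final steps is a lookup from recognized word to a drive command sent over the Xbee link. A minimal sketch in plain Python (the actual firmware is the Arduino code in dfrobotshop_serial.zip and VRbot.zip; the command bytes here are hypothetical, made up for illustration):

```python
# Hypothetical word -> drive-command dispatch table.
# The real protocol lives in the Arduino sketches; these bytes
# are placeholders to show the structure of the logic.
COMMANDS = {
    "forward":  b"F",
    "backward": b"B",
    "left":     b"L",
    "right":    b"R",
}

def dispatch(word, send):
    """Send the drive command for a recognized word, if any."""
    cmd = COMMANDS.get(word)
    if cmd is not None:
        send(cmd)        # e.g. a serial write over the Xbee link
        return True
    return False         # unrecognized word: ignore it

# Quick demonstration with a list standing in for the serial port
sent = []
dispatch("forward", sent.append)
print(sent)
```

Keeping the vocabulary in a single table makes it easy to add or remove words, which is presumably how “move” was dropped from the program.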

More at ... http://www.robotshop.com/gorobotics/