The IPSO Alliance is announcing the Internet of Things 2013 Innovation Contest, inviting people and companies from around the world to submit new Internet Protocol-enabled Smart Objects that demonstrate the power of the Internet of Things. The goal of this contest is to bring forth exciting new concepts and devices that use the Internet Protocol to interconnect embedded sensor and control solutions in areas such as home control, smart buildings, healthcare, lighting control, smart energy, and consumer entertainment.
“The IPSO Alliance is committed to showing the benefits of using the Internet Protocol in the design and development of M2M and IoT projects and solutions. This contest will help demonstrate just how powerful IP Smart Objects can be and how, by shedding proprietary protocols, we can advance the developments that let us connect the Internet with our physical world”, said Geoff Mulligan, IPSO Chairman.
The entries will be judged by a panel of experts and the top designs will be brought to and demonstrated at Sensors Expo June 4-6 at the Donald E. Stephens Convention Center in Rosemont, IL. The finalists will receive the opportunity to meet with the judges and present their device to the attendees of the show. The winning submission will receive a $10,000 prize.
Details and rules of the contest will be published on the IPSO Alliance website at the beginning of February.
Thingsquare will offer two courses for professionals: one introductory course for IT decision makers who want to quickly get up to speed with what the Internet of Things will mean to them and their businesses, and one hardcore course for developers who want to use Thingsquare Mist and Contiki to build the next Internet of Things product.
They will hold two course events in Stockholm, Sweden during March:
Building the Internet of Things with Thingsquare Mist and Contiki. For hardcore developers. The first occasion will be in Stockholm, Sweden, March 4-5.
Internet of Things for Decision Makers. For the busy IT decision maker. March 19, Stockholm, Sweden.
Now go to their website and sign up!
CC1180-based demonstration with Sensinode from CES 2013.
A video is available here.
Google is ridiculously powerful. The service isn’t just search. It isn’t just maps. It isn’t just your email or spreadsheets. Google is artificial intelligence fueled by an endless buffet of every piece of information on the Internet and every human tendency behind it. Google isn’t a website or a collection of services; it’s the most powerful deity in the known universe. And ultimately, it’s strange that so much thought can exist only behind a PC or smartphone screen.
So in 2011, Google Creative Lab approached Berg with a question: “If Google wasn’t trapped behind glass, what would it do?” The answer to that question consumed the entire studio for months. Ultimately, their answer was that computer vision (think technologies like Kinect) would meld with 3-D projection (think uber VJing) to become a sort of material of its very own.
At the heart of Berg’s concept was a smart lamp inspired by Pixar’s Luxo Jr. This lamp would see you all the time, and it would project a “Smart Light” right onto your workspace. It’s a light that would need to be more than a mere augmented reality layer for analog objects; it would have to be what Berg began calling the “little brain” to Google’s “big brain” in the cloud. Think of the little brain as a tiny, playful companion (a digital embodiment of a puppy) to humanize the experience of interaction and make data more approachable. Even though the little brain can’t be seen literally in Berg’s final videos, you can spot its potential in a companion app they called Text Camera. Modeling the software after a puppy makes training Google to be context-aware feel rewarding.
So where were we? Right. Berg had been working on mostly theoretical technology. They had this lamp with projection and visual tracking. But how would they practically glue projection to objects? How would the lamp know what to look at and where to project? That breakthrough came in what Berg called their fiducial switch.
Imagine the switch as a QR code. The camera sees it and can project augmented reality on top. But the fiducial switch took this idea to the next level. It asked, What if you were to split this digital code into two images? Alone, they’d be meaningless to a computer. Assembled, they’d be information. So the fiducial switch is a sort of on/off controller for digital information in real space. In Berg’s final, most realized concept, we see the potential. A very dumb object (a mere chunk of plastic with some springs) becomes a cloud-connected media player. Ultimately, Berg asks, “What if subscriptions to digital services were sold as beautiful robot-readable objects, each carved at point-of-purchase with a wonderful individually generated pattern to unlock access?”
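Berg didn’t publish how the split pattern actually works, but the idea of “two images that are noise alone and information together” can be modeled as XOR secret sharing. The sketch below is purely illustrative (the function names and the sample payload are our own assumptions, not Berg’s implementation): one share is random, the other is the code XOR-ed with it, and overlaying the two recovers the original pattern.

```python
import secrets


def split_code(code: bytes) -> tuple[bytes, bytes]:
    """Split a machine-readable code into two shares.

    Each share on its own is indistinguishable from random noise;
    XOR-ing the two shares together recovers the original pattern.
    """
    share_a = secrets.token_bytes(len(code))
    share_b = bytes(a ^ c for a, c in zip(share_a, code))
    return share_a, share_b


def assemble(share_a: bytes, share_b: bytes) -> bytes:
    """Overlay (XOR) the two shares to recover the code."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))


# Hypothetical payload standing in for a fiducial bit pattern.
code = b"MEDIA-PLAYER-UNLOCK"
a, b = split_code(code)

assert a != code and b != code   # either half alone is meaningless
assert assemble(a, b) == code    # together, they are information
```

Closing the physical switch corresponds to bringing the two halves into the camera’s view at once, which is what makes the “on/off controller for digital information” metaphor work.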
More info here.
At CES, mobile communications specialist Qualcomm announced a development platform, due for release in the second quarter of 2013, that will enable Java developers to write applications for the Internet of Things. Qualcomm’s designation for the Internet of Things is the “Internet of Everything” (IoE), by which the company means to include applications in fields such as household and building automation, in which all devices are centrally controlled. US telecommunications group AT&T is also involved and will provide services for the new platform. Once the project goes live, developers will be able to test applications developed using the IoE platform on AT&T’s network.
The platform is based on Qualcomm’s Gobi QSC6270-Turbo integrated chipset, which includes the Gobi modem technology giving direct access to various 3G connections. This apparently means that no further processors or micro-controllers are required. The application environment is provided by Oracle’s Java ME Embedded 3.2, and the platform includes a number of new JSRs for IoE applications which should allow Java apps to access the Gobi chipset’s many I/O pins and interfaces directly, including GPIO, I2C and SPI.
The use of Java Micro Edition for application development for the Internet of Things is no great surprise, at least since Java owner Oracle raised the prospect of new areas of application for Java ME at the last JavaOne event. Qualcomm and Oracle announced a collaboration in October 2012 around M2M applications. Oracle is looking to expand the embedded version of Java, used so far predominantly in Blu-ray players and set-top boxes, into mobile phones and new areas, such as micro-controllers in industrial control systems, home automation, sensors and machine-to-machine systems.
More info here.
In its early days the internet was seen simply as a way of transferring data across large distances, but it is now playing an ever-increasing part in our lives.
David Reid reports on what is seen as the next big frontier for the web – called the internet of things – allowing you to use your smartphone to control your home heating, pay for parking and even monitor your own fitness.
The report is here.
In conjunction with the ACM/IEEE International Conference on Software Engineering (ICSE), May 18-26, 2013, San Francisco (USA)
By acting as the interface between the digital and physical worlds, wireless sensor networks (WSNs) represent a fundamental building block of the upcoming Internet of Things and a key enabler for Cyber-Physical and Pervasive Computing Systems. Despite the interest raised by this decade-old research topic, the development of WSN software is still carried out in a rather primitive fashion, by building software directly atop the operating system and by relying on an individual’s hard-earned programming skills. WSN developers must face not only the functional application requirements but also a number of challenging, non-functional requirements and constraints resulting from scarce resources. The heterogeneity of network nodes, the unpredictable environmental influences, and the large size of the network further add to the difficulties. In the WSN community, there is a growing awareness of the need for methodologies, techniques, and abstractions that simplify development tasks and increase the confidence in the correctness and performance of the resulting software. Software engineering (SE) support is therefore sought, not only to ease the development task but also to make it more reliable, dependable, and repeatable. Nevertheless, this topic has so far received very little attention from the SE community.
SESENA13 aims to attract researchers belonging to both the SE and WSN communities, not only to exchange recent research results on the topic, but also to stimulate discussion about the core open problems and to define a shared research agenda. The workshop welcomes both research contributions and position statements. The former will foster in-depth technical discussions of novel results with an audience composed of both SE and WSN researchers. The latter will provide the opportunity for presenting open problems, provocative views, or previously unexplored ideas in an informal fashion. To foster discussion, SESENA13 will also host a special “speakers’ corner” session composed of impromptu presentations where attendees (including those without accepted papers) will have the opportunity to present their own views in very short segments (e.g., 2-4 minutes).
Paper submission : February 7, 2013
Author notification : February 28, 2013
Camera Ready Version : March 7, 2013
Workshop : May 21, 2013
More info here.