The T-Engine platform for embedded systems development is a technology that might make the grand visions of ubiquitous computing a reality—provided society’s mindset also surges ahead as fast as technology does!
Arguably, technology is now progressing much faster than a decade ago—but how? In more ways than one, this can be attributed to the shift towards openness and standardisation. Today, the time taken to bring a tech product to market is remarkably short. Designers do not necessarily have to create a product from the ground up. Instead, they can put together existing ‘intellectual property’ components procured from various sources, and within weeks the product is all set to invade the market. Developers can pick and choose from pre-fabricated software modules, ‘mash’ them up, and serve up a lovely application to users within days.
This, however, would not have been possible in the absence of a common platform on which the many components could be built, linked and orchestrated. If this platform is open and standardised, the possibilities get richer—GNU/Linux and the wealth of free and open source software is a quintessential example of the advantages of openness.
The T-Engine platform extends the same logic to the embedded systems world. By providing an open, standardised real-time operating system (the T-Kernel), hardware (the T-Engine board), and object format specifications, the T-Engine project enables the creation, distribution and use of varied middleware—and consequently, the cost-effective and quick development of embedded systems. As a common platform, the T-Engine syncs the work of chipmakers, hardware manufacturers, and software and systems developers, enabling them to bring products to the market faster than ever before.
With the dual benefits of security and standardisation, the open-sourced T-Engine makes a leap towards ubiquitous computing.
Ubiquitous computing—the very concept is exciting and scary at the same time. Imagine a future where everything, from traffic signals to the microwave oven and air-conditioner in your home, has a ‘computer’ in it. These take orders, perform tasks and network with other computers that are ‘always-on’! The moment you cross the last traffic signal on the way back home, a signal is transmitted to your home, and the heater/cooler starts up, so that your living room is cosy when you enter. When you come within a few feet of your house, the gate ‘detects’ you by communicating with your car, your mobile phone or perchance even a chip in your outfit, and opens! When you wish to play music aloud from your mobile digital music player, it connects ad-hoc with the concealed speakers in the room and starts playing. No user interfaces, menus, set-up, or hassles! The possibilities are endless.
But the moment you picture such an environment, several basic requirements and issues immediately surface. If embedded systems are to be spawned at a rate capable of filling every significant object in this world, there should be easily-reusable components and extreme cooperation between all players in the electronics and computing value chain. This, in turn, needs an open, standardised platform to develop these embedded system components. Plus, if there are networked computers everywhere, the chances of identity theft are high—for instance, if somebody is able to electronically ‘disguise’ themselves as you, all your property falls within their control! So, embedded systems have to be crack-proof. The T-Engine platform, spun off from the legendary TRON project, is a potential means of overcoming these concerns.
The T-Engine has been developed, maintained and promoted by the non-profit T-Engine Forum (www.t-engine.org). The forum was created and is led by Professor Dr Ken Sakamura—a ‘mover and shaker’ in the embedded systems space; he and his TRON (The Real-time Operating system Nucleus) project are inseparable from the embedded systems legacy of Japan. The first big success of the TRON project was ITRON—the specifications for a real-time embedded operating system published by the TRON Association. ITRON went on to become the de facto standard for the real-time operating systems (RTOS) used in many embedded system products made in Japan (mobile phones, digital cameras, printers, photocopiers, fax machines, engine-control units of Toyota cars, etc).
The T-Engine Forum can be seen as representing the current research results of the TRON project. It encompasses a wide canvas of new-generation technologies to fuel ubiquitous computing, such as CPUs, operating systems, RFID tags, the software architecture to use the tags in today’s world so that these new systems co-exist with legacy systems, and so on.
Beyond just RTOS standardisation, the T-Engine Project takes a more holistic approach to embedded systems development—one that extends to standardisation of hardware as well. The ‘T-Engine’ typically comprises several open technologies, which we will look at in this section.
The T-Engine architecture is the hardware standard. The T-Engine boards are classified based on their applications and size. By choosing boards matching the size of the finished products, it is possible to swiftly build mobile devices and systems that can be easily ported to various CPUs.
The T-Kernel is an open source, real-time embedded operating system that runs not just on T-Engine hardware, but can be easily ported to other CPUs as well. The T-Kernel is more or less the next generation of ITRON—in fact, it was recently reported that Toyota intends to move to the T-Kernel platform.
The standardisation of both the hardware architecture and the RTOS makes it possible to distribute software resources, middleware and device drivers that are independent of the CPU architecture. The T-Engine project also includes T-Dist—a platform for secure distribution of middleware and applications.
The T-Kernel source code is open-sourced under the T-License. Unlike other popular free software licences like the GPL which require resulting products (such as middleware and applications) also to be ‘open’, the T-License imposes no such restriction on those using it for developing embedded systems, thereby lending itself to several commercial applications.
Let’s not forget security—a major concern in embedded systems! The T-Engine also includes eTRON, a highly-evolved network security architecture developed by the TRON project. eTRON is based on a tamper-proof chip that encrypts information in a way that can be read only by those authorised to do so. As explained in the forum literature, “The eTRON sub-architecture is intended to prevent tapping, falsification, and disguise of malicious users, so that electronic information can be safely delivered to the other party through insecure network channels such as the Internet.”
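The eTRON wire protocols themselves are not detailed in the forum quote above. Purely as a conceptual sketch of the ‘falsification’ defence it describes, the idea of sealing a message so that tampering in transit is detectable can be illustrated with a message authentication code over a shared secret (the key, function names and packet layout below are invented for the example, not eTRON’s actual design):

```python
import hashlib
import hmac

# Hypothetical key that two eTRON entities would have agreed upon;
# in a real system this would come from a secure handshake.
SHARED_KEY = b"session-key-from-secure-handshake"

def seal(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so falsification in transit can be detected."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def open_sealed(packet: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Verify the 32-byte tag; raise if the packet was altered."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet falsified in transit")
    return message
```

A receiver without the shared key cannot forge a valid tag, which is the same property the tamper-proof eTRON chip is meant to guarantee in hardware.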
If you look a little deeper at the concept of ubiquitous computing, it is more than merely building embedded systems that can easily network with each other. It is also about developing a system of identification through which these devices can recognise one another. The Ubiquitous ID (uID) efforts of the T-Engine project address this aspect of everywhere computing.
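In the uID architecture, each object is assigned a ‘ucode’—a 128-bit identifier issued under the uID Centre. As a minimal sketch of what handling such an identifier involves (the grouping format shown is purely illustrative, not the uID Centre’s official notation):

```python
def format_ucode(value: int) -> str:
    """Render a 128-bit ucode as eight groups of four hex digits.

    The 128-bit width reflects the uID ucode; the grouped-hex layout
    is an arbitrary choice for readability in this sketch.
    """
    if not 0 <= value < 1 << 128:
        raise ValueError("ucode must fit in 128 bits")
    hexstr = f"{value:032x}"  # zero-padded, 32 hex digits
    return " ".join(hexstr[i:i + 4] for i in range(0, 32, 4))
```

A device reading a tag would resolve such a code against a network directory to learn what the tagged object is—identification, rather than mere connectivity, is the point.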
Mohit Sindhwani, director—Technology and Training at Viometrix (www.viometrix.com) in Singapore, was one of the first people outside Japan to port the T-Kernel to another processor. “I was inspired to look at the RTOS as it was built on the solid foundation of ITRON—arguably the world’s most-embedded RTOS (in terms of shipment units). At the same time, it was also attractive since it was open source (as is Linux), but was coming specifically from the embedded systems world—as against Linux/Windows CE, which came to the embedded world from the PC/desktop area. This meant that it was better suited for all sorts of applications, including hard real-time applications (both Linux and Windows CE were not able to claim that for quite a while),” says Sindhwani. “The T-Kernel is currently the only open source RTOS that I know, which can support middleware and applications on a very wide set of processors.”
He immediately adds that focusing on the T-Kernel alone is like missing the forest for the trees. It is the platform-based approach that makes the T-Engine a superior technology choice.
“Platform-based computing is essential for rapid development and deployment of products with shorter design cycles. Way too much attention is focused on the T-Kernel (the RTOS) and very little on the T-Engine (the platform). Platform-based computing is critical in meeting stringent Non-Recurring Engineering (NRE) cost constraints and reducing the Time-To-Market (TTM),” he says. “In this sense, the T-Engine offers a plug-and-play approach to building systems, by rapidly combining hardware and software middleware to enable very rapid prototyping. For this, the platform has extensive support for expansion. This will be crucial in the future, if designers want to roll out multiple products in a year. Other operating systems do not have such platforms that can support a wide set of hardware options.”
Professor Sakamura stresses the ‘open and free’ nature as the main reason in favour of the T-Kernel—he feels there is no other RTOS that makes the entire source code available for free. “Not only that, with the backing of the 500 members of the T-Engine Forum, we share the experience of the members with the development community and users as well. The T-Engine Forum is the world’s largest organisation of its kind, and its activity is supported by many semiconductor companies which make CPUs for the embedded systems market: MIPS, Renesas, NEC, Fujitsu, etc,” he says. “As for the T-Engine, we can claim that the T-Engine family supports all the important embedded CPUs in a uniform architecture. Again, not many companies can claim the same. With the help of CPU vendors, we have produced working boards that can be used for development and prototype production.”
As is evident from Sindhwani’s arguments in favour of the T-Engine, one of the biggest advantages of a platform-based approach is the ability to create, distribute and use middleware—enabling one to put together new products and applications quickly, using already available components.
The Tokyo-based YRP Ubiquitous Networking Laboratory (www.ubin.jp) is an active member of the T-Engine Forum. Chiaki Ishikawa, senior researcher/international liaison at the UNL, and Nobuyuki Kashiwa of the T-Engine Forum Secretariat, enumerate some significant middleware.
“There are more interesting applications than middleware. I wonder why myself… sometimes it may make more economic sense to earn money by building application systems with in-house or proprietary middleware quickly for customers, than to sell such middleware. Just a theory, though,” opines Ishikawa.
You can read about several middleware, applications, prototypes, and ubiquitous computing feasibility studies that use the T-Engine, T-Kernel and uID at http://www.t-engine.org/pdf/T-Engine_uID_eTRON_e.pdf, but in this article, let us take a quick look at two recent applications highlighted by Ishikawa.
“I hope the name of ‘Denso’ rings a bell, at least for people in the field of embedded systems for automobiles,” says Ishikawa. Last year, Denso Corporation created a car navigation system using an RTOS based on the T-Kernel Standard Extension; the system was sold under Fujitsu Ten’s Eclipse brand.
Another application worth highlighting is the use of the T-Engine platform for ubiquitous computing at the Ueno Zoo in Tokyo, Japan. Ishikawa explains the experiment: “We used T-Engine/T-Kernel based Ubiquitous Communicator (UC) terminals for guidance in the zoo, and to offer information about animals. We also experimented with passive RFID tags at the front of the cages, active tags for wider guidance, and so-called information kiosks placed in the park, so that people could download content even in the absence of Wi-Fi. It was a pure experiment initially for two weeks, from 15th to 30th of November 2005.”
However, the response from children during the experiment was so positive that the administration of the zoo decided to buy 100 UC (Ubiquitous Communicator) terminals for use as an official guidance system in the zoo—the system has been functioning since October 2006. Ten terminals are kept as a back-up in case of malfunction, while 90 terminals are handed out to visitors, who reserve the terminals in advance via the zoo’s Web page.
The original experiment at Ueno Zoo was part of the larger Tokyo Ubiquitous Technology Project (http://www.tokyo-ubinavi.jp/index_en.html).
The T-Engine/T-Kernel/uID framework is a de facto standard by virtue of being free and open. The concrete specification documents related to the T-Engine board and the T-Kernel OS are freely available, and people are welcome to implement their own version based on the specification. The source code for the T-Kernel is also offered as a reference implementation.
In that sense, GNU/Linux is also a de facto standard, as it is free and open source. But Ishikawa highlights a subtle yet important difference: “This approach is very different from, say, Linux, which also is a de facto standard by being openly available. But there is no clearly-written specification of what is ‘Linux’—and even at the system call API level, things are changing rapidly, to the chagrin of application/library developers and Linux distributors such as Red Hat. This is very different from the T-Engine/T-Kernel situation.”
That said, the uID architecture is being proposed as the basis of a Networked ID framework (a software application framework for using RFID tags in networked application systems) that is being discussed at the International Telecommunication Union (ITU).
Also, to improve the reliability of T-Kernel ports to various hardware platforms, the forum has prepared a test suite to check the completeness of porting. The T-Engine Forum members have access to this test suite under arrangement. This, too, will further improve the quality of T-Engine/T-Kernel systems in the future.
Further technological developments in the T-Engine platform are sure to come in the future. But in addition, one of the main agendas of the T-Engine Forum is to spread awareness about the platform and its usage.
Imagine computers everywhere, and instantly your privacy and security seem to be at risk. The technology to override the privacy issues already exists, according to Professor Sakamura, but it costs more than what an average consumer can afford, and therein lies the problem.
“So we must educate the general population by identifying privacy issues, laying out the technology available to solve these problems, and then discussing the associated cost. In our scheme, we allow many different types of tags to be used. We classify our tags based on their capability for security, including privacy protection (actually called identification prevention mechanism in our tag scheme),” he says.
Usually, tags without such capability (for example, barcodes, or two-dimensional optical codes) are very inexpensive, but there is no security. There are tags that are essentially smart integrated circuit (IC) cards, and some of them can be re-programmed to prevent accidental leakage of the stored IDs. But obviously, they cost more than simple optical tags or simple RFID tags.
In the scheme charted by the TRON project, system developers and users can select the proper class of tags that meets their budget and security requirements (including privacy concerns). Professor Sakamura remarks that this multi-tag approach is very different from the ‘single tag’ approach others, most notably EPCglobal (www.epcglobalinc.org), have taken in the last few years. The availability of a variety of tags in the TRON scheme is a very practical approach to solving real-world problems.
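The selection logic behind this multi-class scheme can be sketched in a few lines. Note that the class names, relative costs and capability flags below are invented for illustration—they are not the uID Centre’s actual certification levels:

```python
# Hypothetical tag classes: (name, relative cost, tamper-resistant?)
TAG_CLASSES = [
    ("optical-barcode", 1, False),   # cheapest, no security
    ("simple-rfid", 5, False),       # cheap, readable by anyone
    ("secure-ic-tag", 50, True),     # smart-card grade, ID-leak prevention
]

def cheapest_tag(need_security: bool) -> str:
    """Pick the least expensive tag class meeting the security requirement."""
    candidates = [t for t in TAG_CLASSES if t[2] or not need_security]
    return min(candidates, key=lambda t: t[1])[0]
```

A low-value grocery item could carry the cheapest class, while a prescription drug would justify the tamper-resistant one—exactly the budget-versus-security trade-off the TRON scheme leaves to the system designer.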
Having produced the OS (formerly the ITRON specification OS, now the T-Kernel), the hardware platform (T-Engine), various tags certified by the uID centre (under the T-Engine Forum), and the uID architecture and its implementation managed by the uID centre, the TRON team believes it can make the dreams of ubiquitous computing come true.
“Already, we have performed many local feasibility studies with the help of the regional governments in Japan: Tokyo, Aomori, Shizuoka, Kumamoto, Kobe, etc. These include food traceability, drug traceability, sightseeing guides, and road navigation systems for the handicapped,” says Professor Sakamura.
Regarding a future where computers are everywhere—we seem to be there in terms of the availability of technology, yet we don’t seem to have arrived! Professor Sakamura opines that the challenges in the way of ubiquitous computing are not merely technical, but also of a socio-economic, political and legal nature. He explains this by discussing an interesting Catch-22 situation.
“We have performed many ubiquitous computing experiments, such as ones that pursue the safety of food distribution, and drug distribution. By incorporating RFID tags and ubiquitous computing platforms that can obtain information from tags anytime, anywhere, we can improve the safety of food products and drugs sold in the market. For example, we can tell when, where, and who manufactured food products, and what kind of ingredients are inside. We can do similar things for drug distribution as well.
“We can even build, and have built, experimental refrigerators that have ubiquitous tag readers in them, so that whenever food is taken out or put in, they can warn if the expiry date for the food has already been reached! Now, when we approach commercial producers of refrigerators, they say, unless all the food products carry such tags, they don’t see the merits of such refrigerators—and so they don’t want to invest in building such refrigerators in large quantities.
“When we approach food vendors, they say, unless there are refrigerators with tag readers, they don’t see the merits of tagging food products, and so they don’t want to tag all the food products they produce every day.
“Ubiquitous computing is concerned with the infrastructure of the future and so there are issues that are not tied to technical topics alone. As the above example shows, there are socio-economic and even legal angles. For example, even if the technology exists to attach information to every interesting object in our surroundings, if a vendor intentionally attaches false information, then the result will not be good. We have seen people die due to mislabelled drugs and food additives. So we need laws and regulations to punish those who attach incorrect information to tags with ill intentions. And for that matter, we must educate the vendors to tag products with correct information irrespective of their intention, be it good or bad.”
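The refrigerator scenario Sakamura describes ultimately reduces to comparing a tag’s expiry date against the current date. A minimal sketch of that check (the tag field names here are invented for illustration; a real tag would carry a ucode to be resolved over the network):

```python
from datetime import date

def check_item(tag_info: dict, today: date):
    """Warn when a tagged food item read by the fridge has expired.

    `tag_info` stands in for data read off the item's tag; the
    'name'/'expiry' fields are hypothetical.
    """
    expiry = date.fromisoformat(tag_info["expiry"])
    if expiry < today:
        return f"{tag_info['name']} expired on {expiry.isoformat()}"
    return None  # still fresh, nothing to report
```

As the interview makes clear, the hard part is not this logic but getting every food product tagged in the first place.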
Sakamura signs off, saying, “We need to make society aware of these non-technical issues that must be solved before ubiquitous computing is part of the social infrastructure—on top of the technology itself, of course.”