Agility relates to responsiveness to change. Many SOA pundits claim that agility is a prime reason to adopt SOA, and some classify agility into business, IT, process, and other varieties to claim the influence of SOA. Is there a good measurement model for business or IT agility? From a business perspective, one can measure:

1) time-to-market for new solutions/products/services
2) sales from new or improved solutions/products/services
3) growth in profitability
4) scalability and unit cost
5) time to absorb mergers and acquisitions

Although these measures may be used to declare victory, a good measurement model of agility that illustrates causal factors still helps. Such a model relies on business-relevant scenarios of change. HP developed an agility index as an important feedback measure of SOA success; a toy sketch of one scenario-based model follows below. Are there other ways, similar or better?
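As promised, here is a minimal, purely hypothetical sketch of a scenario-based agility score: a weighted average of normalized improvements across business-relevant change scenarios. The scenarios, weights, baselines, and the 50%-improvement cap are invented for illustration; this is not HP's agility index or any published model.

```python
# Illustrative only: a toy "agility index" computed as a weighted average of
# normalized change scenarios. Scenarios, weights, and scale are hypothetical.

# Each scenario: (name, weight, baseline, observed). A lower observed value
# relative to the baseline means higher agility for that scenario.
scenarios = [
    ("time-to-market a new product (weeks)", 0.30, 26, 18),
    ("absorb an acquisition (months)",       0.25, 18, 12),
    ("unit cost at 2x volume (relative)",    0.20, 1.0, 0.8),
    ("launch a new sales channel (weeks)",   0.25, 12, 10),
]

def agility_index(scenarios):
    """Return a 0-100 score; 100 means every scenario improved by 50% or more."""
    total = 0.0
    for _, weight, baseline, observed in scenarios:
        improvement = (baseline - observed) / baseline           # 0 = no change
        total += weight * min(max(improvement / 0.5, 0.0), 1.0)  # cap at a 50% gain
    return round(100 * total, 1)

print(agility_index(scenarios))   # 51.5 for the sample data above
```

The value of such a toy lies less in the number it produces than in the discussion it forces about which change scenarios matter, how to baseline them, and how to weight them.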
It has been a while, so I have decided to break the prolonged silence and restart writing when I find the time. Perhaps a prolonged silence means someone is either busy (or lazy :-) depending on how the day goes.
A lot has already been written about many of the topics I listed nearly two years ago, and there is plenty of advice available on the web about different computing paradigms. Despite this abundance of advice, many in the business world still wrestle with the fundamental questions:
1. Value: How does a particular computing paradigm help?
2. Use: How can one adopt or incorporate the computing paradigm?
3. Practice: How can one mature and sustain the paradigm over the long term?
Consider, for example, the topic of "governance" within an enterprise these days. Here are some common prefixes for the word: enterprise, corporate, business, enterprise architecture, IT, information architecture, SOA, services, data, security, project, and Web 2.0. Do we really need this many? Do we really need to read a 2-inch-thick binder to understand them? Maybe, or maybe not. Is there a way to connect these dots so they make coherent sense? Something that makes you say "Hmmm!"
April was a month of tours and travel, visiting different places and bringing home interesting learning experiences and perspectives. Speaking of learning, I have noted some fascinating sites that approach human-machine interaction in a different manner. Most GUIs are designed to encourage task-centric behavior: users learn to navigate a plethora of screens specific to the task at hand and perhaps master it. Interaction between a human and a system is usually disfluent and cannot be compared to an interaction between two people. In an attempt to improve the interactive experience, many research institutions and firms have begun to entertain the notion of a digital avatar.
Well, what is a digital avatar, you ask? It is a design metaphor used to represent a real-life human being. Avatars come in 2D and 3D models, often animated, and most people know them from the games and entertainment industry. They activate on initial page display; react to user inputs such as mouse clicks, keywords, audio commands, and natural-language queries; and can even act natural with human-like gestures. Novel business uses of these avatars include support diagnostics, online synthetic customer-service representatives (CSRs), customer training, and e-learning. A small sketch of this event-driven behavior appears below.
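For the technically curious, here is a minimal, purely hypothetical sketch of the event-driven behavior described above: an avatar that registers handlers for triggers such as page load, keywords, and free-form queries. The Avatar class, trigger names, and canned responses are illustrative only and do not reflect any particular vendor's API.

```python
# Illustrative sketch of an event-driven avatar. Triggers, gestures, and
# responses are hypothetical, not a real product's interface.
class Avatar:
    def __init__(self, name):
        self.name = name
        self.handlers = {}          # event type -> handler function

    def on(self, event, handler):
        self.handlers[event] = handler

    def handle(self, event, payload=None):
        handler = self.handlers.get(event)
        return handler(payload) if handler else f"{self.name} shrugs."

# A toy "synthetic CSR" that reacts to page load, keywords, and queries.
csr = Avatar("synthetic CSR")
csr.on("page_load", lambda _: "waves and asks: How can I help you today?")
csr.on("keyword",   lambda word: f"nods and pulls up help topics for '{word}'")
csr.on("query",     lambda q: f"answers in plain language: {q} ...")

print(csr.handle("page_load"))
print(csr.handle("keyword", "billing"))
```

Real avatar platforms layer animation, speech synthesis, and dialogue management on top of this kind of dispatch loop, but the basic stimulus-response structure is the same.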
The Association for Computing Machinery (ACM) has named the Danish computer scientist Peter Naur the winner of the 2005 A. M. Turing Award. The award recognizes Naur's pioneering work on defining the Algol 60 programming language, which paved the way for many other modern programming languages. Incidentally, the Backus-Naur Form (BNF) notation for defining the syntax of a programming language is named in part after Professor Naur.
Recognition may be viewed as a sub-problem of discrimination and/or classification. Many technical solutions to recognition problems depend on a proprietary set of feature vectors that collectively help discriminate one sample from another (a toy sketch of this idea appears at the end of this post). There are many examples of recognition technologies in use today, including biometrics and other applications involving voice (or speech), image, scene, gesture, smell, and/or video. Remember the Bronstein twins, who cracked the 3D facial-recognition problem in 2003? They developed routines that could successfully tell the two of them apart. A recent experimental example comes from MyHeritage, which takes your picture and matches it against look-alike celebrities. A similar application, HNeT from AND Corporation, tracks and recognizes up to four individuals simultaneously in real time and can recognize a face even as an individual ages. Another company, Identix, provides face-recognition software that can spot an individual in a crowd. Yet another example comes from Riya, intended for photo sharing and searching; it can spot an individual within a given set of photos.

Searching (Google), copying (Xerox), classification (Yahoo), summarization, conversion, assimilation, and distribution have a wide range of applications that support many businesses today. So, what do you think?
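As promised, here is a minimal, purely illustrative sketch of recognition as feature-vector discrimination: each enrolled sample is reduced to a vector of features, and a probe is matched to its nearest neighbor in that feature space. The gallery, feature values, and threshold are made up; real systems use far richer, proprietary feature sets and far more sophisticated matching.

```python
import math

# Toy recognition-by-discrimination: each enrolled sample is a feature vector,
# and a probe is accepted as the nearest enrolled identity if it is close
# enough. All numbers here are invented for illustration.
gallery = {
    "alice": [0.21, 0.77, 0.35],
    "bob":   [0.64, 0.12, 0.58],
}

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recognize(probe, gallery, threshold=0.25):
    """Return the nearest enrolled identity, or 'unknown' if nothing is close."""
    name, best = min(((n, distance(probe, v)) for n, v in gallery.items()),
                     key=lambda pair: pair[1])
    return name if best <= threshold else "unknown"

print(recognize([0.23, 0.75, 0.33], gallery))  # -> alice
print(recognize([0.90, 0.90, 0.90], gallery))  # -> unknown
```

The hard part in practice is not the matching step but choosing features that stay discriminative across pose, lighting, aging, and the other variations the post mentions.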
Software firms such as Google and Microsoft have been seeking to stay close to consumers by providing many innovative and inexpensive tools that empower individuals to explore creative ways of demonstrating power-of-one commerce. Here is the latest example from Google: Official Google Video Blog.
Widespread adoption of such technologies, in both developed and developing countries, provides sufficient scale for a compelling business advantage. Imagine a modern, inexpensive, low-power, ruggedized consumer device with:

a) an electronic-paper display,
b) an open-source OS platform,
c) a utility tool chest,
d) a secure, self-managed, bandwidth-preserving, optimized wireless mesh technology, and
e) a host of multimedia and multimodal capabilities.

In a world of digital commerce, the utility value of such a device becomes apparent to anyone intending to deliver services to a digital community. Examples of such visions include the universal personal appliance (H21), One Laptop Per Child, and hybrid/ruggedized Tablet PCs. While no one can foresee the unhealthy consequences that future digital communities may bring, the realization of such communities may be closer than we think.