Wednesday, June 18, 2008

A Rendezvous of Content Adaptable Service and Product Line Modeling

Seo Jeong Lee and Soo Dong Kim -- PROFES 2005

They propose a service decision modeling technique for content adaptable applications

Michael Dertouzos [8] envisioned four fundamental forces in pervasive computing:
1- Natural Interaction
2- Automation
3- Individualized information access
4- Collaboration


The paper presents a taxonomy of variability (figure not reproduced here).

The content adaptable service decision process
  1. Define System Architecture
    1. embrace contextual change
    2. embrace ad hoc composition
    3. recognize sharing as the default
  2. Define the variation points and variants
    1. Context is profile of network, device, user, service
    2. for each of the above profiles we may think of a variation point
  3. Define the dependencies between variation points
  4. Define the dependencies between variants
  5. Define the strategy of negotiation
    1. It depends on the domain, service, and application
    2. The decision value of the strategy should be one_of or in_the_range_of the variant values.
  6. Select the adequate algorithm or module
    1. A QoS algorithm or something similar can be used to choose the required set of components and requirements, based on the information fed to the system by the system designer.
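The variation-point/strategy steps above can be sketched in code. This is a minimal toy of my own, not from the paper: a strategy's decision value is checked against a variation point's variants as one_of or in_the_range_of (step 5), and the first satisfying variant is selected.

```python
# Hypothetical sketch of step 5: checking a negotiation strategy's
# decision value against a variation point's variants.
# All names here are my own, not the paper's.

def decide(variants, strategy):
    """Return the first variant satisfying the strategy, or None.

    strategy is ("one_of", {allowed values}) or
    ("in_the_range_of", (low, high)).
    """
    kind, arg = strategy
    for v in variants:
        if kind == "one_of" and v in arg:
            return v
        if kind == "in_the_range_of" and arg[0] <= v <= arg[1]:
            return v
    return None

# e.g. a "network bandwidth" variation point with candidate variants
print(decide([56, 512, 1024], ("in_the_range_of", (100, 800))))  # 512
print(decide(["image", "text"], ("one_of", {"text"})))           # text
```

A real implementation would attach such a strategy to each variation point (network, device, user, service profile) and resolve dependencies between variation points before deciding.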

Tuesday, June 17, 2008

Synergy between Software Product Line and Intelligent Mobile Middleware

Weishan Zhang and Klaus Marius Hansen, 2007

Current mobile middleware is designed around a "one-size-fits-all" paradigm, lacking flexibility for optimization, customization, and adaptation.

They use frame-based techniques and XVCL (XML-based Variant Configuration Language) to define and configure points of variability.

Reference [4] of this paper seems interesting to read.

They consider two major problems with the current mobile middleware applications:
  1. Monolithic structure: Specialized optimization and customization might be required
  2. Ontology evolution has not been addressed in the current ontology based middleware
They use a service-oriented architecture to connect the different pieces of their services together. This imposes a performance overhead that may considerably degrade the system's execution.

  • Configuration is done as early as possible
  • Frame based ontology management and aggregation mechanism can run both on J2ME and J2SE
  • Ontology evolution is more than the management of the ontology itself
  • Flexible template capabilities for XVCL
They use RacerPro as their main means of reasoning over the ontology.

Frame-based Ontology_Java Processing (FOJP)
  • Bridging the OWL ontologies to Java classes by providing mappings
  • Management and handling of ontology evolution
  • Managing the update of agent definition, including the agent belief, goals, actions, and plans
A context ontology is divided into two parts: the parts that change frequently, and the parts that stay more or less the same over a longer period of time. XVCL commands are then used in a meta-ontology to bridge these concepts and provide an aggregation of all these classes of ontologies.
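A toy illustration of that split (my own simplification, not XVCL itself): a stable ontology fragment and a frequently-changing fragment are aggregated by a "meta" step that plays the role of XVCL's composition commands.

```python
# Toy stand-in for the frame-based split: stable vs. volatile ontology
# fragments, aggregated into one view. Class/property names are invented.

STABLE = {"Person": ["name"], "Device": ["id"]}        # rarely changes
VOLATILE = {"Device": ["battery_level", "location"]}   # changes often

def aggregate(stable, volatile):
    """Merge volatile properties into the stable class definitions."""
    merged = {cls: list(props) for cls, props in stable.items()}
    for cls, props in volatile.items():
        merged.setdefault(cls, []).extend(props)
    return merged

print(aggregate(STABLE, VOLATILE)["Device"])
# ['id', 'battery_level', 'location']
```

The point of the separation is that ontology evolution then mostly touches the volatile fragment, leaving the stable core and its consumers alone.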

Ontology evolution involves two phases
  1. meta-ontology development
  2. other meta-artifacts for the mobile middleware including the code components

Monday, June 16, 2008

Supporting Pluggable Configuration Algorithms in PCOM

Marcus Handte, Klaus Herrmann, Gregor Schiele, Christian Becker

The authors gave the initial definition of PCOM in [1].

Devices have component containers that manage the hosted components on the device. Functionalities are offered as contracts in terms of interfaces. A contract can also have resource requirements that a component must meet in order to use another component. For applications, there is an application anchor, which is essentially the starting (root) component of an application.

Configuration algorithms control the chaining of components.

The goals for PCOM are
  1. Resilience to failure
  2. Efficiency & minimization
  3. Simplicity & Extensibility
In the new design the container is broken into parts
  1. the application manager: starts the anchor, but it restarts it from the very beginning whenever needed, which seems quite wasteful
  2. assembler: implements the functionality of computing valid configurations. Assembler can launch different configuration algorithms depending on the situation.
  3. component containers: the actual providers of components for the other two parts of the system.
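The assembler's pluggability can be sketched as follows. This is a minimal sketch with names I made up, not PCOM's actual API: the assembler is parameterized by a configuration algorithm, here a naive greedy one that chains components whose contracts offer each requirement.

```python
# Minimal sketch (names assumed, not PCOM's) of a pluggable assembler:
# it selects a configuration algorithm and uses it to chain components.

def greedy_configure(requirements, components):
    """Pick the first component whose contract offers each requirement."""
    chain = []
    for req in requirements:
        match = next((c for c in components if req in c["offers"]), None)
        if match is None:
            return None            # no valid configuration exists
        chain.append(match["name"])
    return chain

ALGORITHMS = {"greedy": greedy_configure}   # further algorithms plug in here

class Assembler:
    def __init__(self, algorithm="greedy"):
        self.configure = ALGORITHMS[algorithm]

components = [{"name": "gps", "offers": {"location"}},
              {"name": "screen", "offers": {"display"}}]
asm = Assembler("greedy")
print(asm.configure(["location", "display"], components))  # ['gps', 'screen']
```

Swapping the algorithm (say, for an exhaustive or distributed one) only changes the `ALGORITHMS` entry, which is the extensibility point the redesign is after.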

Application Data Services: Making Steps Towards an Appliance Computing World

Andrew Huang, Benjamin Ling, John Barton, Armando Fox

The paper introduces two main dilemmas in using devices:
  1. They are more complex
  2. There are too many features
The vision of the paper: "An appliance computing world is one in which people move data effortlessly among artifacts to accomplish a variety of tasks"

The paper introduces a set of principles and attributes for any ADS system:
  • A1: People move data using concrete syntax, like "Post the picture to my wall"
  • P1: Bring devices to the forefront: computers and devices are invisibly woven into the physical infrastructure (Mark Weiser's vision)
  • A2: Devices are simple, single-purpose appliances: this is not quite true, because users have shown acceptance of devices with more complex capabilities; for example, turning cellphones into cameras has not been rejected by users
  • P2: Keep the number of user-controllable features on devices to a minimum: this seems right, as it gives better manipulation and control over the device and allows simpler user interfaces. In the end, the device shouldn't be too complicated for the user.
  • A3: People perform a variety of traditional tasks, as well as a new set of advanced tasks, with their devices. The functionality to perform high-level tasks can be placed on users' PCs but kept hidden from the user.
  • P3: Place the software required to accomplish tasks in the network infrastructure
Their implementation of the ADS system sends requests as tuples (userid, command-tag, data), with userid and command-tag used for the following purposes:
  • Application Selection
  • Access Control
  • Other service features
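The tuple dispatch can be sketched like this. The table contents and names are my own invention; the paper only says that the userid and command-tag drive application selection and access control.

```python
# Sketch (invented names) of dispatching a (userid, command-tag, data)
# tuple: the tag selects the application, the userid gates access.

APPS = {"post-picture": "photo-wall-app"}     # command-tag -> application
ACL = {"alice": {"post-picture"}}             # userid -> allowed tags

def handle(request):
    userid, tag, data = request
    if tag not in ACL.get(userid, set()):     # access control
        return "denied"
    return APPS[tag]                          # application selection

print(handle(("alice", "post-picture", b"...")))  # photo-wall-app
print(handle(("bob", "post-picture", b"...")))    # denied
```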
They have three parts to the architecture
  1. Data Receive Stage
    1. Role: deals with device heterogeneity
    2. It handles all the device connection requirements but is very poor for scalability. It becomes a single point of failure for the system as well.
    3. It relies on a stateless Access Point (what "stateless" means here isn't clear to me), and an aggregator enables extensibility of the Access Point by adding new device features
    4. The aggregator is actually the point of contention, since that is where all the integration between the access points and the input data required for the application control stage takes place.
  2. Application Control Stage
    1. The data is collected to create a chain of components that satisfies the application. It is not clear how this data is monitored to satisfy the requirements of the applications and components, or how developers should be made aware of these requirements when building components.
    2. Command Canonicalizer
      1. Allows having simple user interfaces
    3. Template Database
      1. Minimizing device configuration
    4. Dataflow Manager
      1. Coordinates data input by the user: how is this required data specified?
  3. Service Execution

Sunday, June 08, 2008

A Reflective Framework for Discovery and Interaction in Heterogeneous Mobile Environments

Grace, P., Blair, G.S., Samuel, S.: A reflective framework for discovery and interaction in heterogeneous mobile environments. SIGMOBILE Mob. Comput. Commun. Rev. 9 (2005) 2-14.

a component is "a unit of composition with contractually specified interfaces, which can be deployed independently and is subject to third party creation" [14].

Three layers
  • concrete middleware section
    • binding framework
    • service discovery framework
  • abstract middleware-programming model
  • abstract to concrete mapping
Lookup operations work across the different discovery protocols.

Problem: How to find which discovery protocol is in use?
  1. Having a fixed point of agreement
    1. Not all protocols can guarantee the use of this technology.
    2. The higher level mechanisms may change
  2. The approach that they promote is Cycle and See
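"Cycle and See" can be sketched roughly as follows (my own stand-in, not the paper's implementation): try each known discovery protocol in turn until one answers a lookup.

```python
# Rough sketch of "Cycle and See": cycle through plug-in discovery
# protocols and see which one answers. Protocol objects are stand-ins.

def cycle_and_see(protocols, service_name):
    for proto in protocols:           # cycle through known protocols
        result = proto(service_name)  # "see" whether this one answers
        if result is not None:
            return result
    return None                       # no protocol in use answered

slp = lambda name: None               # pretend SLP finds nothing here
upnp = lambda name: f"upnp://{name}"  # pretend UPnP answers

print(cycle_and_see([slp, upnp], "printer"))  # upnp://printer
```

The obvious cost is latency proportional to the number of protocols tried, which is the trade-off for not needing a fixed point of agreement.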
Interesting component design for OpenCom

Toward Wide Area Interaction with Ubiquitous Computing Environments

The overall idea: to unify the abstractions exposed by existing ubicomp systems, providing a coarse-grained interface for application interfacing.

Two impediments to wider deployment of ubicomp environment
  1. supporting users and applications within single administrative or user domains
  2. lack of a shared model for ubiquitous computing
The model considered for the initial version of the web-service-based middleware:
  • Environment Model
    • Through service discovery
    • Through a component that handles more complex models of the environment
    • Related aspects
      • Environment State
      • Environment Meta-state
      • Environment Implementation link: the set of software components
        • Event sources
        • Context sources
        • Services
        • Entity Handler
  • Entities
  • Context
    • Values
    • High level inferred context
  • Services
  • Entity relationships
  • Events
  • Data or content
Environment profiles: to provide semantic enrichment
  • entities
  • services
  • context
  • events
  • content
--------------------
Thoughts:
The paper proposes a bottom-up integration of middleware functionalities with the requirements of an environment. The objects in an environment are classified as discussed, and the relations between them are established. Based on the requirements of users, rules are defined in the form of Jena rules that can extract the concepts of integration from ontologies and identify which components can be used for which services. The ontology preserves the relationships between the entities, their contexts, and the components.

The reasoner then identifies the set of appropriate components that have to be composed in order to provide the right combination for the request of the environment to be processed.
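The rule idea can be illustrated with a tiny stand-in (this is not Jena and not the paper's rule set; facts and names are invented): facts relate components to the services they provide, and a rule selects the components whose provided service matches an environment request.

```python
# Tiny stand-in for the Jena-rule idea: triples relate components to
# provided services; the "rule" selects matching components.
# All facts here are invented for illustration.

facts = {("lampComp", "provides", "lighting"),
         ("hvacComp", "provides", "heating")}

def components_for(request):
    # rule: (?c provides ?s) AND requested(?s)  ->  use ?c
    return sorted(c for (c, p, s) in facts
                  if p == "provides" and s == request)

print(components_for("lighting"))   # ['lampComp']
```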

The problem with their approach is that they have chosen a bottom-up approach to bind the components to the concepts of user needs. This makes the whole design heavily dependent on the way the composition has been defined in the ontology; if a relationship between the components changes, the design loses its validity and the whole ontology needs to be changed.

On the other hand, this doesn't provide any possibility for component reuse, because the design is bottom-up: the components drive the design, as opposed to the design driving the components. So the modules cannot be reused; instead, the whole system has to be replaced, making its scalability absolutely questionable.

Furthermore, for each new system a new integration model has to be defined, and thus a complete rework at the level of system design is also required. So this new architecture doesn't solve the problem of adaptability to new domains; it just makes it possible for different systems in different domains to choose the same technology to connect to an environment. That's not really the role of a broker, though, is it?