Sunday, October 18, 2009

Context-Aware Service Composition in Pervasive Computing Environments

Sonia Ben Mokhtar, Damien Fournier, Nikolaos Georgantas, and Valérie Issarny

http://www.springerlink.com.proxy.lib.sfu.ca/content/38252l8676501424/fulltext.pdf

The paper presents an approach to context-aware service composition based on workflow integration.

Context-awareness is a property of a system that uses context to provide relevant information and/or services to the user, where relevance depends on the user's task.

They use ontologies to define context and then use this information to validate candidate compositions against contextual requirements.

They want to enable the user to perform a task anywhere and at any time. Networked services are described in OWL-S extended with context information -> preconditions & effects + context attributes.

Context-aware service composition is done in two steps:
  1. context-aware service discovery provides a set of services that are candidates for composition
  2. context-aware process integration provides a set of composition schemes that conform to the task behavior
Matching and discovery of services is based on the algorithm proposed by Paolucci et al., matching the inputs and outputs of user tasks and processes with the inputs and outputs of advertised services. What is important is that there can be a subsumption relationship between the advertised and the required services.
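For reference, Paolucci-style matching ranks candidates by a degree of match derived from subsumption. A minimal Java sketch, assuming a hypothetical Ontology type with a subsumes(general, specific) predicate; all names here are illustrative, not the paper's API:

    // Degrees of match, ordered from best to worst.
    enum DegreeOfMatch { EXACT, PLUGIN, SUBSUMES, FAIL }

    interface Concept { }                          // an ontology concept
    interface Ontology {                           // assumed reasoning interface
        boolean subsumes(Concept general, Concept specific);
    }

    class SignatureMatcher {
        private final Ontology ontology;

        SignatureMatcher(Ontology ontology) { this.ontology = ontology; }

        // Compare a requested output concept against an advertised one.
        DegreeOfMatch degreeOfMatch(Concept requested, Concept advertised) {
            if (requested.equals(advertised)) return DegreeOfMatch.EXACT;
            // Advertised output is more general than requested: the service "plugs in".
            if (ontology.subsumes(advertised, requested)) return DegreeOfMatch.PLUGIN;
            // Advertised output is more specific than requested.
            if (ontology.subsumes(requested, advertised)) return DegreeOfMatch.SUBSUMES;
            return DegreeOfMatch.FAIL;
        }
    }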

During the integration process, first, the contextual preconditions and effects of service operations have to be taken into account, and second, the global task's contextual requirements have to be checked.

Filtering of candidate paths happens based on the following criteria (a pruning sketch follows this list):
  1. Starting from the current state of the path, the task's next symbols cannot be reached in the global automaton.
  2. The simulated context does not fulfill the contextual preconditions of the incoming operations.
  3. Some attributes of the simulated context do not meet the global contextual requirements of the user task.
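A hypothetical sketch of how these criteria might prune a candidate path during integration; Path, Automaton, Context, and Operation are assumed types, not the paper's actual data structures:

    import java.util.List;
    import java.util.Set;

    interface State { }
    interface Symbol { }
    interface Context { }
    interface Operation { boolean preconditionsHold(Context ctx); }
    interface Automaton { boolean canReach(State from, Set<Symbol> nextSymbols); }
    interface TaskRequirements { boolean satisfiedBy(Context ctx); }
    interface Path {
        State currentState();
        Set<Symbol> nextTaskSymbols();
        List<Operation> incomingOperations();
    }

    class PathFilter {
        private final Automaton globalAutomaton;
        private final TaskRequirements globalRequirements;

        PathFilter(Automaton automaton, TaskRequirements requirements) {
            this.globalAutomaton = automaton; this.globalRequirements = requirements;
        }

        // Returns true if the candidate path should be discarded.
        boolean prune(Path path, Context simulatedContext) {
            // 1. The task's next symbols are unreachable from the path's current state.
            if (!globalAutomaton.canReach(path.currentState(), path.nextTaskSymbols()))
                return true;
            // 2. The simulated context violates a precondition of an incoming operation.
            for (Operation op : path.incomingOperations())
                if (!op.preconditionsHold(simulatedContext)) return true;
            // 3. The simulated context violates the task's global contextual requirements.
            return !globalRequirements.satisfiedBy(simulatedContext);
        }
    }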

A Tale of Clouds: Paradigm Comparisons and Some Thoughts on Research Issues

Lijun Mei, W.K. Chan, T.H. Tse

http://ieeexplore.ieee.org.proxy.lib.sfu.ca/stamp/stamp.jsp?tp=&arnumber=4780718&isnumber=4780615

Preliminaries

Cloud Computing
  • Horizontal Cloud Scalability: connect and integrate multiple clouds to work together as one logical cloud
  • Vertical Cloud Scalability: improve the capacity of a cloud by enhancing the individual nodes in the cloud
Basically, the above concepts describe scalability in breadth (horizontal) and scalability in depth (vertical).

Service Computing
  • to create a service composition, engineers may use specifications such as WS-BPEL
  • to carry out workflows, web services or other types of services might be used
Pervasive Computing
  • embedded in constantly changing computing environments
  • a well-developed environment will enable pervasive software to work everywhere without extra effort
  • Environmental features are used to understand and react to the users. These environmental variables are referred to as context and are collected using different sensors and information in the environment
Comparing Cloud with Pervasive and Service Computing
  • Service computing is good at providing functionality and flexible services
  • Pervasive computing enables users to use software everywhere and provides self-adaptivity with respect to the environmental context
Cloud computing needs both functionality modeling and context sensitivity.

In terms of I/O, cloud computing is closer to service computing.
In terms of storage, cloud computing seems to be closer to pervasive computing.

[Tables in the paper compare I/O, storage, and calculation features for cloud, service, and pervasive computing.]

How do computing entities plug into the system?
  • service computing: registration and discovery of services
  • pervasive computing: mobile computing entities join and leave the environment
  • cloud computing: applications can be entity-aware so that heterogeneous computing entities can plug in; new computing entities can (and should) be added to the system dynamically, on the fly (a minimal registry sketch follows this list)
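The common thread is a registry that computing entities can join and leave at run time, while clients discover whatever is currently plugged in. A toy Java sketch of that pattern (all names are illustrative, not from the paper):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Entities register under a capability name and may join or leave at any
    // time; this covers service registration/discovery, mobile entities
    // joining/leaving, and dynamic plug-in of heterogeneous entities.
    class EntityRegistry<E> {
        private final Map<String, E> entities = new ConcurrentHashMap<>();

        void join(String capability, E entity) { entities.put(capability, entity); }
        void leave(String capability)          { entities.remove(capability); }
        E discover(String capability)          { return entities.get(capability); }
    }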
How do computing clouds store and access large-scale data?
  • Pervasive computing: mobile entities store their data in the environment
  • Service computing: usually the amount of data stored is negligible, and services are more often stateless: they do the calculation but do not deal with storing information
  • Cloud computing: there is a finite amount of storage space on the cloud too, so cloud systems may need to share data or provide some sort of inter-cloud communication in order to scale better and transfer some of the data to other clouds
How does a computing cloud become adaptive to both internal and external changes?
  • Service computing: environmental changes, evolving quality of services
  • Pervasive computing: quality of the mobile entities involved
  • Cloud computing: how does the environment change for a cloud? (left as an open question)

An Architecture for Non Functional Properties Management in Distributed Computing

[[ CHECK REFERENCES ]]

Pierre de Leusse, Panos Periorellis, Theo Dimitrakos, and Paul Watson

http://www.cs.ncl.ac.uk/publications/inproceedings/papers/1149.pdf

Three categories for Grid
  1. computational grid
  2. data grid
  3. service grid: instead of providing computational or data resources, it enables the sharing of specific functions defined and exposed as services
Cloud Computing: Resources come from the cloud, a public network, rather than a specific identifiable system.

The rationale behind cloud computing:
  • the underlying complexity of the system and its characteristics should be hidden not only from end users but, for the most part, from technical users as well. Amazon Simple Storage Service (S3) is a web service providing storage capabilities.
  • it is not only about computation and data
Potential Research challenges
  • adaptability in response to changes in the nonfunctional requirements of the system
  • From changes in internals of components to external changes
  • Reacting to messages intercepted by the infrastructure
  • Safety and Security of the profiles
Interesting points
  • nonfunctional properties management
  • rapid adaptation
  • dynamic composition
  • distributed system integration
IBM's perspective on autonomic computing (a control-loop sketch follows this list)
  • Self-configuration: adapts automatically to dynamically changing environments
  • Self-healing: systems discover, diagnose, and react to disruptions
  • Self-optimizing: systems monitor and tune themselves automatically
  • Self-protecting: systems anticipate, detect, identify, and protect themselves against attacks
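IBM's reference model realizes these self-* properties with a MAPE control loop (Monitor, Analyze, Plan, Execute). A minimal sketch, with every interface assumed for illustration:

    // Assumed interfaces; a real autonomic manager would also share knowledge
    // (the "K" in MAPE-K) across the four phases.
    interface Metrics { }
    interface Symptom { }
    interface Plan { }
    interface Monitor  { Metrics collect(); }
    interface Analyzer { Symptom analyze(Metrics metrics); }
    interface Planner  { Plan plan(Symptom symptom); }
    interface Effector { void execute(Plan plan); }

    class AutonomicManager {
        private final Monitor monitor;
        private final Analyzer analyzer;
        private final Planner planner;
        private final Effector effector;

        AutonomicManager(Monitor m, Analyzer a, Planner p, Effector e) {
            monitor = m; analyzer = a; planner = p; effector = e;
        }

        // One pass of the loop; a real manager runs this continuously.
        void controlLoop() {
            Metrics metrics = monitor.collect();          // observe the managed system
            Symptom symptom = analyzer.analyze(metrics);  // detect disruptions or drift
            if (symptom != null) effector.execute(planner.plan(symptom)); // adapt
        }
    }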

Cloud Computing – Issues, Research and Implementations

Mladen A. Vouk
Department of Computer Science, North Carolina State University, Raleigh, North Carolina, USA

http://loveni.name/clover/Cloud%20Computing%20-%20Issues,%20Research%20and%20Implementations.pdf

In the context of cloud computing, the key question should be whether the underlying infrastructure is supportive of the workflow-oriented view of the world.

Characteristics of a cloud environment
  • support a large number of users ranging from very naive to very sophisticated
  • support construction and delivery of curricula for these users
  • generate adequate content diversity, quality, and range
  • be reliable and cost-effective to operate and maintain
In the context of the VCL technology an image is a tangible abstraction of the software stack.

Service Composition and Provisioning
  • sample and combine existing services and images
  • create new composites, update them, etc.
  • workflow aggregation and automation
Cloud computing research issues
  • image and service construction
  • Cloud provenance data (process, data, workflow, system or environment)
  • optimization
  • image portability
  • security
  • utilization

Tuesday, October 13, 2009

Calling the cloud: Enabling mobile phones as interfaces to cloud applications

http://people.inf.ethz.ch/oriva/pubs/riva_middleware09.pdf

Sweet Home 3D App: http://www.sweethome3d.eu/download.jsp

The premise is that applications are executed either on the mobile phone or on the server; however, there is a need to split an application between the two.

Application profiling is done by constructing a consumption graph.

Measured parameters:
  • The consumed memory
  • The data traffic generated both in input and output
  • The code size
They consider the amount of transferred data to be the major factor in creating the consumption graph.

Problems with the approach:
  • The instrumentation is done manually, which requires access to the source code of the bundles
  • The focus is only on the user interface, because these are considered the more suitable resources to be moved to the cloud
  • They also argue that hardware capabilities vary from phone to phone, which is why they filter the CPU-usage parameter out; thus, the bundles' CPU cost is omitted
  • The developer marks bundles as movable or nonmovable. What is the reason for a developer to classify bundles this way? How do you know that the classification is correct and works properly?
  • They assume that every bundle exposes exactly one service, so the approach does not handle bundles that expose more
In the consumption graph, every vertex is a bundle and every edge is a service dependency (a sketch of this structure follows the characteristics list below).

Bundle Characteristics:
  • type: movable or nonmovable
  • memory consumption
  • code_size
  • in: the amount of input data to a bundle B
  • out: the amount of data sent out of a bundle B
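A minimal Java model of the consumption graph and these bundle characteristics; the field names mirror the list above, everything else is an illustrative choice:

    import java.util.ArrayList;
    import java.util.List;

    class Bundle {
        enum Type { MOVABLE, NONMOVABLE }
        final Type type;      // movable or nonmovable
        final long memory;    // memory consumption
        final long codeSize;  // code_size
        final long in;        // amount of input data to the bundle
        final long out;       // amount of data sent out of the bundle

        Bundle(Type type, long memory, long codeSize, long in, long out) {
            this.type = type; this.memory = memory; this.codeSize = codeSize;
            this.in = in; this.out = out;
        }
    }

    // A directed edge of the consumption graph: a service dependency.
    class Dependency {
        final Bundle from, to;
        final long traffic;   // data exchanged over this dependency

        Dependency(Bundle from, Bundle to, long traffic) {
            this.from = from; this.to = to; this.traffic = traffic;
        }
    }

    class ConsumptionGraph {
        final List<Bundle> bundles = new ArrayList<>();
        final List<Dependency> edges = new ArrayList<>();
    }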
The modularity considered is at the functional (bundle) level and not at the class or function level. Whether there is a better approach to thinking about modularity is another problem that should potentially be addressed.

The distribution happens only between two nodes and not more.

The optimal cut maximizes or minimizes an objective function while satisfying the phone's resource constraints; the parameters below feed that objective (a hedged cost sketch follows the list).

  • k: bundles running on the mobile device
  • t: bundles on the mobile device with dependencies on bundles on the server
  • alpha: the bandwidth
  • f_ij: how many times communication between bundles i and j happens (a weird metric!)
  • beta: the capacity of the communication link plus the installation overhead (how does this reflect the installation overhead when it is a completely client-dependent parameter?)
  • the proxy cost represents how much effort is required to create the proxies needed for proper communication between the client and the server
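The notes list the parameters but not the exact formula, so the following is only a plausible reconstruction rather than the paper's objective: the cost of a cut is taken as the time to ship data across cut edges at bandwidth alpha, plus a fixed proxy-creation cost per cut dependency, and a cut is feasible only if nonmovable bundles stay on the phone and the memory budget holds. It reuses the Bundle/Dependency/ConsumptionGraph types sketched above.

    import java.util.Set;

    class CutEvaluator {
        final double alpha;      // available bandwidth
        final double proxyCost;  // per-dependency cost of generating proxies
        final long phoneMemory;  // memory available on the phone

        CutEvaluator(double alpha, double proxyCost, long phoneMemory) {
            this.alpha = alpha; this.proxyCost = proxyCost; this.phoneMemory = phoneMemory;
        }

        // Cost of running `onPhone` locally and every other bundle on the
        // server; +infinity marks an infeasible cut.
        double cost(ConsumptionGraph graph, Set<Bundle> onPhone) {
            long usedMemory = 0;
            for (Bundle b : graph.bundles) {
                // Assumption: nonmovable means pinned to the phone.
                if (b.type == Bundle.Type.NONMOVABLE && !onPhone.contains(b))
                    return Double.POSITIVE_INFINITY;
                if (onPhone.contains(b)) usedMemory += b.memory;
            }
            if (usedMemory > phoneMemory) return Double.POSITIVE_INFINITY;

            double total = 0;
            for (Dependency d : graph.edges) {
                // An edge crosses the cut if exactly one endpoint stays on the
                // phone; d.traffic stands in for f_ij times the data per interaction.
                if (onPhone.contains(d.from) != onPhone.contains(d.to))
                    total += d.traffic / alpha + proxyCost;
            }
            return total;
        }
    }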

DR-OSGi: Hardening Distributed Components with Network Volatility Resiliency

http://people.cs.vt.edu/~tilevich/papers/DR-OSGI.pdf

  • A clear exposition of the challenges of treating the ability to cope with network volatility as a separate concern that can be expressed modularly.
  • An approach for hardening distributed component applications with resiliency against network volatility.
  • A proof-of-concept infrastructure implementation, DR-OSGi, which demonstrates how existing distributed component applications can be hardened against network volatility.
They use R-OSGi as the base for their system and protect against network volatility.

Scenarios
  1. A remote service becomes unavailable.
  2. A temporarily unavailable remote service becomes available again (a hedged fallback sketch follows this list).
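A hedged sketch of the hardening idea (not R-OSGi's or DR-OSGi's actual API): a local proxy wraps a remote service, buffers calls while the service is unavailable, and replays them once it comes back. The simplified LogService interface stands in for the OSGi Log Service used as a case study below.

    import java.util.ArrayDeque;
    import java.util.Queue;

    interface LogService { void log(String message); }

    class ResilientLogService implements LogService {
        private final LogService remote;                      // stub to the remote host
        private final Queue<String> pending = new ArrayDeque<>();
        private volatile boolean remoteAvailable = true;

        ResilientLogService(LogService remote) { this.remote = remote; }

        @Override public void log(String message) {
            if (remoteAvailable) {
                try { remote.log(message); return; }          // normal remote call
                catch (RuntimeException networkFailure) {     // scenario 1: service gone
                    remoteAvailable = false;
                }
            }
            pending.add(message);                             // degrade gracefully: buffer
        }

        // Scenario 2: the remote service is rediscovered; replay buffered calls.
        void onServiceRestored() {
            remoteAvailable = true;
            String message;
            while ((message = pending.poll()) != null) remote.log(message);
        }
    }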
Case Studies:
  1. Log Service
  2. UserAdmin Service
  3. Distributed Lucene
  4. DNA Hound