Monday, December 28, 2009

Wishbone: Profile-based Partitioning for Sensornet Applications

Two major problems with application partitioning:
  • Heterogeneity
  • Decomposition
The requirements for Wishbone applications
  1. Streaming dataflow model: the model should be a dataflow graph
  2. Predictable input rates and patterns: since the approach relies on profiling, this constraint is needed for the profiles to stay valid
The front-end creates a dataflow graph. The backend performs graph optimization and reduces work functions to an intermediate language that can be fed to a number of code generators.

Namespaces are used to logically define the distribution of the code, i.e., the code that can be distributed, not the code that necessarily needs to be distributed.

If the code placed (logically) on a node is stateful, the state of its stateful operators should be replicated on that node too.

Stateful server operators cannot be moved into the network; however, stateful node operators can be brought to the server.

The system considers two modes, conservative and permissive. In conservative mode, stateful operators are not pushed to the server; in permissive mode they can be, provided the application is capable of dealing with data loss.

To measure data flow, the Scheme compiler executes the code during compilation, producing platform-independent data rates.

Once partitioned, the partition is executed on simulated or real hardware to measure the CPU footprint for the partition. Timing statements are placed at the beginning and end of each operation; the timestamps help with extracting the memory footprint for each piece of the code.
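As a rough illustration of this instrumentation step, here is a minimal sketch in Python (the paper's toolchain targets embedded hardware and a Scheme front-end, so this is only an analogy; `profile_operator`, `graph`, and `costs` are hypothetical names):

```python
import time

costs = {}  # accumulated per-operator cost over a profiling run

def profile_operator(name, op, item):
    """Place a timing statement at the beginning and end of one
    operation and accumulate the elapsed time for that operator."""
    start = time.perf_counter()
    result = op(item)
    costs[name] = costs.get(name, 0.0) + (time.perf_counter() - start)
    return result

def run_profiled(graph, stream):
    """graph: ordered (name, work_function) pairs forming a linear dataflow."""
    for item in stream:
        for name, op in graph:
            item = profile_operator(name, op, item)

run_profiled([("square", lambda x: x * x), ("halve", lambda x: x / 2)],
             range(1000))
print(costs)
```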

Cost is measured using Cost = aC + bNet, where C is CPU cost, Net is network cost, and a and b are weighting coefficients.

The ILP formulation is solved as a minimum-cost cut for partitioning the graph (see the sketch below).
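A minimal sketch of how such a cost might be evaluated for one candidate partition, assuming C sums the profiled CPU cost of operators placed on the embedded node and Net sums the data rates of edges crossing the cut (function and parameter names are illustrative, not Wishbone's):

```python
def partition_cost(edges, side, cpu, rate, a, b):
    """Cost = a*C + b*Net for one assignment of operators to sides.
    edges: (u, v) dataflow edges; side[x] is 'node' or 'server';
    cpu[x]: profiled CPU cost on the node; rate[(u, v)]: profiled data rate."""
    C = sum(c for x, c in cpu.items() if side[x] == 'node')
    Net = sum(rate[(u, v)] for (u, v) in edges if side[u] != side[v])
    return a * C + b * Net
```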

Wednesday, December 16, 2009

Dynamic Function Placement for Data-intensive Cluster Computing

Application partitioning is difficult because of:
  • variation in application behavior
  • variability in resource availability
  • variability in workload mixes
Effective use of cluster resources requires
  1. load balancing
  2. proper partitioning of functionality among producers and consumers
In Abacus, function placement is based solely on black-box monitoring, removing the burden of worrying about function placement from programmers.

Abacus consists of a programming model and a runtime system. In the Abacus programming model, programmers need to define their components as explicitly migratable, functionally independent components or objects.

Anchored elements need to be explicitly defined in the graph of the application. I think this is required because, when it comes to modeling the application's graph, these components should be marked properly.

Abacus components:
  1. Migration and Location Transparent Invocation Component (Binding Manager)
  2. Resource Monitoring and Management Component (Resource Manager)

The Resource Manager uses notifications to collect monitoring information (monitoring and profiling happen at runtime).

The best net benefit is calculated by the server in order to determine whether the migration is worthwhile (minimum requirements for doing the migration). Code mobility and dynamic linking are sidestepped in this model.
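A hedged sketch of such a net-benefit test (the linear model, names, and threshold are assumptions; Abacus's actual cost/benefit formulas are more involved):

```python
def should_migrate(benefit_per_sec, migration_cost_sec, horizon_sec,
                   threshold_sec=0.0):
    """Migrate only when the benefit expected over the remaining time
    horizon outweighs the one-time migration cost by some margin."""
    net_benefit = benefit_per_sec * horizon_sec - migration_cost_sec
    return net_benefit > threshold_sec
```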

Mobile Objects are defined by the programmer.

Cluster characteristics critical for function placement:
  • Communication bandwidth between nodes
  • Relative processor speed among nodes
  • Workload characteristics (e.g., bytes moved among functions, instructions executed by each function)
-> Data Intensive Applications: those that selectively filter, mine, sort, or otherwise manipulate large data sets. Spread the parallel computations across the source/sink servers.

"Programmable storage services" is what they consider a potential alternative name to "cloud".

The difference between Coign and Abacus is that Coign relies on the profiling history of functions/components to make decisions, while Abacus makes its decisions at runtime.

Equanimity dynamically balances the load between a single client and its servers. Abacus extends it to real world clusters, i.e., resource contention, resource heterogeneity, workload variation.

Dynamic adaptation of resource placement based on resource usage and availability.

The two applications used in Abacus:
  1. The file system
  2. The search application

Goals for Abacus:
  1. improve overall performance

Parameters measured:
  • Data Flow Graph
  • Memory Consumption
  • Instructions Executed per Byte
  • Stall Time

Sunday, December 13, 2009

The Coign Automatic Distributed Partitioning System

The problem:

The need to partition and place pieces of applications on different nodes. Repartitioning is done infrequently because of the effort required, even though it may buy a lot of efficiency for the application.

Application reprofiling is supported: the application is profiled periodically and the optimal solution is recalculated.

The architecture for Coign:

  • The application is augmented with instrumentation for Coign using the binary rewriter.
  • The instrumented binary is run through a set of profiling scenarios (this degrades application performance; inter-component communications are summarized).
  • The profile analysis engine combines component communication profiles and component location constraints to create an abstract inter-component communication graph (ICC).
  • Location constraints are obtained from the programmer, from analysis of component communication records, and from application binaries.
  • The ICC graph is combined with a network profile to create a graph of potential communication time on the network
  • The graph-cutting algorithm: lift-to-front minimum cut (a simpler min-cut stand-in is sketched below)
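For illustration, a standard max-flow/min-cut computation over such a communication graph; this uses BFS augmenting paths (Edmonds-Karp) rather than Coign's lift-to-front variant, and the graph encoding is an assumption:

```python
from collections import deque

def min_cut_value(capacity, s, t):
    """Min cut = max flow between client node s and server node t.
    capacity: dict {(u, v): communication cost between components}."""
    adj, flow = {}, {}
    for (u, v) in list(capacity):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        capacity.setdefault((v, u), 0)   # residual (reverse) edge
    for e in capacity:
        flow[e] = 0

    def bfs():                           # shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and capacity[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    total = 0
    while (parent := bfs()) is not None:
        bottleneck, v = float('inf'), t
        while parent[v] is not None:     # find residual bottleneck
            u = parent[v]
            bottleneck = min(bottleneck, capacity[(u, v)] - flow[(u, v)])
            v = u
        v = t
        while parent[v] is not None:     # push flow along the path
            u = parent[v]
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck
            v = u
        total += bottleneck
    return total

print(min_cut_value({('c', 'x'): 3, ('x', 's'): 2}, 'c', 's'))  # 2
```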

The set of components for Coign Runtime:


The Instance Classifier is probably the most important part of the Coign runtime and of the profiler, as it tries to identify similarities between instances and extracted profiles. They list the following classifiers, which need to be further investigated:
  1. incremental classifier (Straw man classifier)
  2. Procedure Called-By Classifier (PCB)
  3. Static Type Classifier (ST)
  4. Static Type Called-By Classifier (STCB)
  5. Internal-function Called-By Classifier (IFCB)
  6. Entry Point Called-By Classifier (EPCB)
  7. Instantiated-By Classifier (IB)

The next step is correlating the profile of one instance with another based on similar resource usage and communication behavior. They use an instance communication vector.

The graph-cutting algorithm used is the lift-to-front minimum cut.

Sunday, November 22, 2009

Capacity Leasing in Cloud Systems using the OpenNebula Engine

Borja Sotomayor, Rubén Santiago Montero, Ignacio Martín Llorente, and Ian Foster
http://www.cca08.org/papers/Paper20-Sotomayor.pdf

The problem: advance leasing.
  • In the current models, resources are allocated at the time of request.
  • resource requests subject to nontrivial policies are not supported
  • capacity specification in advance is not supported
  • no support for variable resource usage
  • dynamic renegotiation of resource allocation is not possible
  • small cloud systems can benefit from queuing, priorities, and advance reservation

Approach: OpenNebula + Haizea

OpenNebula:
  1. Core: Manages the lifecycle of a VM + management and monitoring of the physical host
  2. Capacity Manager: Adjusts placement of VMs
  3. Pluggable Virtualizer Access Driver: exposes the basic functionality of the hypervisor
Haizea is a lease manager:
  1. Leases in Haizea specify hardware resources, software environment, and availability.
  2. It supports three lease types:
     • advance reservation leases: a request for resources at a specific time
     • best-effort leases: resources are assigned as soon as possible, queuing requests if necessary
     • immediate leases: provisioned when requested or not at all

Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

Comparing some cloud services:

Thursday, November 19, 2009

QCon SF 2009: Simon Guest, Patterns of Cloud Computing

By Stefan Tilkov on November 19, 2009

Similar Post: http://horicky.blogspot.com/2009/11/cloud-computing-patterns.html

These are my unedited notes from Simon Guest's talk about Patterns for Cloud Computing at QCon SF 2009.

* "This talk is about Jim, he has many questions about cloud computing…"
* 5 patterns of cloud-based applications
* Definition of cloud computing
* Different models:
o Applications must run on-premises – complete control, upfront capital costs
o Application runs at a hoster – lower capital costs, but pay for fixed capacity even if idle
o Shared, multi-tenant, pay as you go – pay someone for a pool of computing resources that can be applied to a set of applications
* Public Cloud vs. Private Cloud – private cloud useful e.g. for telcos offering this to their customers
* Windows Azure – compute, storage, management, based on 64bit Windows images
* SQL Azure - RDBMS
* .NET Services - service bus and access control
* [ed.: Who thinks of these names, and even more importantly, why doesn't Microsoft fire them?]
* Different models infrastructure (IaaS) vs. Platform as a Service (PaaS) as main paths
* Slide shows that MS offers a higher-level stack than Amazon - EC2 provides instances, while the Windows Azure model is a platform-as-a-service model
* [Seems to me this is one of the major problems of Azure – it seems neither one nor the other, as I would define PaaS as what GAE does, which is much higher-level than simply a Windows Server]

Pattern #1: Using the Cloud for Scale

* Shows how to scale up a Web app using more machines, load balancer, database partitioning
* A lot of work - a lot of money
* Designed for peak capacity, idle for a lot of time
* Much easier to let cloud vendor handle this dynamically
* Prerequisite for successful scaling in the cloud: having a queue to decouple web tier and backend (see the sketch after this list)
* Starbucks [of all possible examples! ;-)] as an example for queueing
* Demo: "PrimeSolvr" (Web 2.0 because it's missing the last "e")
* 3 takeaways: 1) core tenet of cloud computing: ability to scale up/down 2) understand how to communicate between roles and nodes 3) strategy for when to scale up and down
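A toy sketch of that decoupling, with an in-process queue standing in for a cloud queue service (the talk is about Azure queues; everything here is illustrative):

```python
import queue
import threading

jobs = queue.Queue()                 # stands in for a cloud queue service

def web_tier(request):
    jobs.put(request)                # accept fast, defer the heavy work
    return "202 Accepted"

def process(request):
    print("processing", request)

def worker_role():
    while True:
        request = jobs.get()         # backend scales by adding workers
        process(request)
        jobs.task_done()

threading.Thread(target=worker_role, daemon=True).start()
web_tier({"n": 42})
jobs.join()                          # wait until the worker finishes
```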

Pattern #2: Using the cloud for multi tenancy

* Simple approach internally: one application per customer - works only for small numbers
* Implications: Schema customizations, UI customizations
* 3 options for data in a multi-tenant environment: 1) share DB between customers 2) each customer gets a separate DB - hard to do on-premise, much easier in the cloud 3) fixed DB schema with customizations on a tenant-by-tenant basis
* Demo: ASP.NET MVC app using the HTTP host name to switch UI and DB Schema (sketched below)
* Takeaways: 1) Consider multi-tenancy first, hard to retrofit 2) Design considerations must include both data and UI specifics 3) Identity as a very important consideration, see MS Patterns and Practices paper on multi-tenancy ID
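A toy sketch of the host-name switch from the demo, mapping the HTTP Host header to per-tenant DB/UI settings (the mapping and field names are illustrative):

```python
# Tenant registry keyed by HTTP host name, as in the demo described above.
tenants = {
    "alpha.example.com": {"db": "tenant_alpha", "theme": "blue"},
    "beta.example.com":  {"db": "tenant_beta",  "theme": "green"},
}

def resolve_tenant(host_header):
    cfg = tenants.get(host_header)
    if cfg is None:                                  # option 1: shared DB
        return {"db": "shared", "theme": "plain"}
    return cfg                                       # option 2: DB per tenant

print(resolve_tenant("alpha.example.com")["db"])     # tenant_alpha
```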

Pattern #3: Using the cloud for compute

* Popularized by MapReduce
* Apache Hadoop, Cloudera, Amazon Elastic MapReduce (a Hadoop implementation)
* Typical on-premise solution: very infrastructure-heavy, complex, expensive
* No explicit framework implementation on Azure
* Demo (inspired by MapReduce): Development Fabric (local execution environment), not using virtualization [similar to GAE environment]; next step is upload to Azure staging area, next level production
* Takeaways: MapReduce very visible, although it can be hard to initially grasp; learn about existing implementations; MS academic effort: Dryad

Pattern #4: Using the cloud for (infinite) storage

* Problem: Affinity between hardware and data
* how does the cloud help? breaks the affinity
* virtualized layer between the data you store and the hardware underneath
* Three ways: blobs, tables, relational
* MS: Azure Blob Storage – REST API (using GET (even range requests) and PUT); PutBlock API to move blocks - transaction build up [must look this up]
* Azure Table Storage (Key/Value pairs)
* Initial relational effort: SQL Server Data Services (MIX 08) - REST API on top of SQL
* Customer reaction: We want to do TDS (MS native DB protocol)
* SQL Data Services (MIX 09), later SQL Azure: TDS (SQL Server) in the Cloud
* Similarity between internal and cloud architecture makes it easier for customers [agreed, even though this might be more of a problem]
* Demo: SQL Azure (http://sql.azure.com); Codeplex sqlazuremw (migration wizard) - migration from local SQL Server DB to the cloud (subset of SQL Server functionality, e.g. restrictions on certain value types, clustered indexes)
* Takeaways: 1) Storage in the cloud may look the same, but breaks the affinity problem 2) Pricing is relevant 3) SQL Azure is a factor for moving to the cloud in the first place

Pattern #5: Using the cloud for communications

* Classic approach: VAN, now replaced by direct Internet file transfers
* Cloud approach: REST-based queues could be used for communication - not commonly used, problem: need to pass tokens around
* Putting a web facade in front of the queue doesn't work too well either due to firewall problems. HTTP polling is bad [why?]
* MS Solution: .NET Service Bus
* TCP Relay: outbound bi-directional socket, tunneled through the bus and kept alive on both sides. Enables routing of arbitrary protocols across company boundaries
* Alternative: Message Buffer, exposed using AtomPub, support retrieve, peek, lock
* Takeaways: Be careful consuming REST-based queues because of shared secret
* additional trouble because of REST
* service bus as potential solution

Last question: How can patterns be integrated?

* 1) Sample PHP (!) application running on Windows Azure, ported to GAE and EC2 (as ASP.NET)
* 2) MapReduce spreads load across Amazon, Google, MS
* 3) Store results in SQL Azure database
* 4) Coordinate communication using .NET Service Bus
* How many prime numbers between 1 and 10,000,000? 40 jobs of 250,000 numbers
* WPF client app sends off job
* "I'm gonna submit the job and pray"
* Spontaneous applause as the demo actually worked
* Make sure you have a clear definition of cloud computing
* Explore the 5 usage patterns
* Think about the next steps for implementation and migration

Sunday, October 18, 2009

Context-Aware Service Composition in Pervasive Computing Environments

Sonia Ben Mokhtar, Damien Fournier, Nikolaos Georgantas, and Valérie Issarny

http://www.springerlink.com.proxy.lib.sfu.ca/content/38252l8676501424/fulltext.pdf

The paper presents a context-aware service composition based on workflow integration.

Context-awareness is a property of a system that uses context to provide relevant information and/or services to the user, where relevance depends on the user's task.

They use ontologies to define context and then use this information to validate context.

They want to enable the user to perform a task anywhere and at any time. Networked services are described in OWL-S extended with context information -> preconditions & effects + context attributes.

Context-aware service composition is done in two steps:
  1. context aware service discovery provides a set of services that are candidate to composition
  2. context-aware process integration provides a set of composition schemes that conform to the task behavior
Matching and discovery of services is done based on the algorithm proposed by Paolucci, matching the inputs and outputs of user tasks and processes with the inputs and outputs of advertised services. What is important is that there can be a subsumption relationship between the advertised and the required services.
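Paolucci-style matchers grade candidates by degrees of match. A minimal sketch of those degrees, with `subclass_of` standing in for ontology subsumption reasoning (the helper, constants, and toy ontology are illustrative, not from the paper):

```python
EXACT, PLUGIN, SUBSUMES, FAIL = 3, 2, 1, 0   # ordered from best to worst

def degree_of_match(required, advertised, subclass_of):
    """Compare a required concept against an advertised one.
    subclass_of(a, b) is True when concept a is subsumed by concept b."""
    if required == advertised:
        return EXACT
    if subclass_of(required, advertised):    # advertised is more general
        return PLUGIN
    if subclass_of(advertised, required):    # advertised is more specific
        return SUBSUMES
    return FAIL

# Toy ontology: Comedy is a kind of Movie.
isa = {("Comedy", "Movie")}
subclass = lambda a, b: (a, b) in isa
print(degree_of_match("Comedy", "Movie", subclass))   # 2 (plug-in)
```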

During the integration process, first, the contextual preconditions and effects of service operations have to be taken into account, and second, the global task's contextual requirements have to be checked.

Filtering of components happens based on the following criteria:
  1. Starting from the actual state of the path, the task's following symbols cannot be reached in the global automaton
  2. The simulated context does not fulfill the contextual preconditions of the incoming operations.
  3. some attributes of the simulated context do not meet the global contextual requirements of the user task.

A Tale of Clouds: Paradigm Comparisons and Some Thoughts on Research Issues

Lijun Mei, W.K. Chan, T.H. Tse

http://ieeexplore.ieee.org.proxy.lib.sfu.ca/stamp/stamp.jsp?tp=&arnumber=4780718&isnumber=4780615

Preliminaries

Cloud Computing
  • Horizontal Cloud Scalability: connect and integrate multiple clouds to work together as one logical cloud
  • Vertical Cloud Scalability: improve the capacity of a cloud by enhancing the individual nodes in the cloud
Basically the above concepts talk about scalability in depth and scalability in breadth

Service Computing
  • to create a service composition, engineers may use specifications such as WSBPEL
  • to carry out workflows, web services or other types of services might be used
Pervasive Computing
  • embedded in constantly changing computing environments
  • a well-developed environment will enable pervasive software to work everywhere without extra effort
  • Environmental features are used to understand and react to the users. These environmental variables are referred to as context and are collected using different sensors and information in the environment
Comparing Cloud with Pervasive and Service Computing
  • Service Computing is good at providing functionality and flexible services
  • Pervasive Computing enables users to use software everywhere and provides self-adaptivity with respect to environmental contexts
Cloud computing needs both functionality modeling and context sensitivity.

In terms of I/O, Cloud Computing is closer to Service Computing.
In terms of storage, Cloud Computing seems to be closer to Pervasive Computing.

Comparison of IO for cloud, service, and pervasive computing


Comparison of storage for cloud, service, and pervasive computing
Comparison of calculation features for cloud, service, and pervasive computing

How do computing entities plug into the system?
  • service computing: registration and discovery of services
  • pervasive computing: mobile computing entities join and leave the environment
  • cloud computing: applications can be made entity-aware to plug in heterogeneous computing entities. New computing entities can (or should) be added to the system dynamically, on the fly.
How do computing clouds store and access large-scale data?
  • Pervasive computing: mobile entities store their data in the environment
  • Service computing: usually the amount of data stored is negligible and services are more often stateless services which do the calculation but do not deal with storing information
  • Cloud Computing: there is finite amount of space for storage on the cloud too, so cloud systems may need to share data or may need to provide some sort of inter-cloud communication in order to scale better and transfer some of the data to other clouds
How does a computing cloud become adaptive to both internal and external changes?
  • Service computing: environmental changes, evolving quality of services,
  • Pervasive computing: quality of mobile entities involved
  • Cloud computing:
  • How does the environment change for a cloud?

An Architecture for Non Functional Properties Management in Distributed Computing

[[ CHECK REFERENCES ]]

Pierre de Leusse, Panos Periorellis, Theo Dimitrakos, and Paul Watson

http://www.cs.ncl.ac.uk/publications/inproceedings/papers/1149.pdf

Three categories for Grid
  1. computational grid
  2. data grid
  3. service grid: instead of providing computational or data resources enables sharing of specific functions defined and exposed as services
Cloud Computing: Resources come from the cloud, a public network, rather than a specific identifiable system.

The rationale behind cloud computing:
  • the underlying complexity of the system and its characteristics should be hidden not only from end users but, for the most part, from technical users as well. Amazon Simple Storage Service (S3) is a web service providing storage capabilities.
  • It is not only about computation and data
Potential Research challenges
  • adaptability in response to changes in the nonfunctional requirements of the system
  • From changes in internals of components to external changes
  • Reaction on Message interceptions received by the infrastructure
  • Safety and Security of the profiles
Interesting points
  • nonfunctional properties management
  • rapid adaptation
  • dynamic composition
  • distributed system integration
IBM's perspective on autonomic computing
  • Self-configuration: adapts automatically to dynamically changing environments
  • Self-healing: system discovers, diagnose, and reacts to disruptions
  • Self-optimizing: systems monitor and tune systems automatically
  • Self-protecting: systems anticipate, detect, identify, and protect themselves.

Cloud Computing – Issues, Research and Implementations

Mladen A. Vouk
Department of Computer Science, North Carolina State University, Raleigh, North Carolina, USA

http://loveni.name/clover/Cloud%20Computing%20-%20Issues,%20Research%20and%20Implementations.pdf

In the context of cloud computing the key question should be whether the underlying infrastructure is supportive of the workflow oriented view of the world.

Characteristics of a cloud environment
  • support large number of users ranging from very naive to very sophisticated
  • support construction and delivery of curricula for these users
  • generate adequate content diversity, quality, and range
  • be reliable and cost-effective to operate and maintain
In the context of the VCL technology an image is a tangible abstraction of the software stack.

Service Composition and Provisioning
  • sample and combine existing services and images
  • create new composites, update them, etc.
  • workflow aggregation and automation
Cloud computing research issues
  • image and service construction
  • Cloud provenance data (process, data, workflow, system or environment)
  • optimization
  • image portability
  • security
  • utilization

Tuesday, October 13, 2009

Calling the cloud: Enabling mobile phones as interfaces to cloud applications

http://people.inf.ethz.ch/oriva/pubs/riva_middleware09.pdf

Sweet Home 3D App: http://www.sweethome3d.eu/download.jsp

The discussion is that the applications are executed either on the mobile phones or on the server. However, there is a need to split the application between the two.

Application profiling is done by providing a consumption graph.

Measured parameters:
  • The consumed memory
  • The data traffic generated both in input and output
  • The code size
They consider the amount of transferred data as the major factor in creating the consumption graph.

Problems with the approach:
  • The instrumentation is done manually which requires access to the source code for the bundles
  • The focus is only on the user interface because they are considered as more suitable resources to be moved to the cloud
  • They also argue that hardware requirements vary from phone to phone, which is why they filter the CPU-usage parameter out; thus, the bundles' CPU cost is omitted.
  • The developer marks bundles as movable and nonmovable. What is the reason for a developer to classify bundles as movable and nonmovable? how do you know that the classification is correct and that it works properly?
  • They assume that every bundle exposes only one service, so the approach does not handle bundles with more services
In the consumption graph, every vertex is a bundle and every edge is a service dependency.

Bundle Characteristics:
  • type: movable or nonmovable
  • memory consumption
  • code_size
  • in: the amount of input data to a bundle B
  • out: the amount of data sent out of a bundle B
The modularity considered is at the functional level, not at the class or function level. Whether there is a better approach to modularity is another problem that should potentially be addressed.

The distribution happens only between two nodes and not more.

The optimal cut maximizes or minimizes an objective function while satisfying the phone's resource constraints (see the sketch after the parameter list below).

  • k: bundles running on the mobile device
  • t: bundles on the mobile device with dependency to bundles on the server
  • alpha: the bandwidth
  • fij: how many times communication between the two bundles happens!!! (weird idea)
  • beta: the capacity of the communication link + the installation overhead! (How does it reflect the installation overhead when that is a completely client-dependent parameter?)
  • also, the proxy cost represents how much effort is required to create the proxies needed for proper communication between the client and the server
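A minimal sketch of one plausible objective over these parameters (the exact formula in the paper differs in detail; this only shows how cut traffic, link parameters, and proxy cost might combine, with all names illustrative):

```python
def cut_cost(on_phone, interactions, alpha, beta):
    """Score a two-way split of the consumption graph.
    on_phone: set of bundles kept on the device;
    interactions: {(i, j): f_ij}, how often bundles i and j communicate;
    alpha: bandwidth factor; beta: per-link proxy/installation overhead."""
    cut = [e for e in interactions
           if (e[0] in on_phone) != (e[1] in on_phone)]
    traffic = alpha * sum(interactions[e] for e in cut)
    proxy = beta * len(cut)          # one proxy pair per cut dependency
    return traffic + proxy

def feasible(on_phone, memory, mem_limit):
    """The cut must also satisfy the phone's resource constraints."""
    return sum(memory[b] for b in on_phone) <= mem_limit
```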

DR-OSGi: Hardening Distributed Components with Network Volatility Resiliency

http://people.cs.vt.edu/~tilevich/papers/DR-OSGI.pdf

  • A clear exposition of the challenges of treating the ability to cope with network volatility as a separate concern that can be expressed modularly.
  • An approach for hardening distributed component applications with resiliency against network volatility.
  • A proof-of-concept infrastructure implementation, DR-OSGi, which demonstrates how existing distributed component applications can be hardened against network volatility.
They use R-OSGi as the base for their system and protect against network volatility.

Scenarios
  1. A remote service becomes unavailable
  2. A temporarily unavailable remote service becomes available again
Case Studies:
  1. Log Service
  2. UserAdmin Service
  3. Distributed Lucene
  4. DNA Hound

Wednesday, September 30, 2009

Composing REST services and collaborative workflows

Bite offers a language for composing REST services.

REST composition enables some sort of data flow composition model (similar to Yahoo Pipes)

Human interactions happen by forms, instant messaging, linked email exchange, etc.

Collaborative services such as Lotus support unstructured interactions between ad-hoc communities linked via common business goals. Complex collaborative applications.

Design Goals for Bite
  • Atom life-cycle
  • Lightweight process model
  • Scripting Approach
  • Language extensibility
  • Web and human integration
It allows for extensibility of the language and supports parsing the scripts. Also, GUI elements can easily be connected to the Bite script, and their states can be updated by sending replies to their interfaces.

Semantic-based Context-aware Dynamic Service Composition

http://portal.acm.org/citation.cfm?id=1516533.1516536

Contributions
  1. It allows not only system designers but also end users to specify rules on how to compose context-aware applications
  2. It supports both rule-based and learning-based context-aware service composition
  3. It utilizes semantic similarities among components to improve its adaptability in a dynamic environment
  4. supports seamless service migration which autonomously composes a new application and migrates onto it when user context changes

Component Service Model with Semantics (CoSMoS)


There is a UML meta-model for CoSMoS which helps with defining:
  • Functional Information
  • Semantic Information
  • Contextual Information
  • User specific rules

Component Runtime Environment

CoRE consists of the following pieces
  • Discovery Manager
  • Execution Manager
  • User Manager
Three methods for context acquisition
  1. metadata of the components: context information is embedded in the component's metadata
  2. context-aware discovery or user submodule: acquiring context information through existing context aware technologies
  3. inference: infer context based on a set of facts
Semantic Graph-based Service Composition (SeGSeC)

two approaches are used
  1. Rule-based
  2. Learning-based
A problem in creating the workflow is that the semantic correctness of the workflow is checked at the end of its execution, whereas it could be included in the execution process so that semantically valid components are discovered first, before actually dealing with the composition and orchestration of components.

The learning algorithm for selecting a component
Pi = max (SSi,j x (CMDj + const)), 1 <= j <= n

where SSi,j is the semantic similarity between two components and CMDj is the context matching degree: how well the component matches the context it is used in, based on previous experiments using this component in that context. To decide on context matching conditions, a C4.5 decision-tree algorithm is used, collecting information about composed workflows. (A minimal scoring sketch follows the list below.) Context-aware dynamic service composition systems have problems for the following reasons:
  1. predefined rules usually cannot be modified once they are deployed
  2. it is difficult to define a generic rule that is applicable to every user
  3. some users' preferences may be too complex to define as a set of rules
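A minimal sketch of the selection rule above, assuming ss[i][j] holds pairwise semantic similarities and cmd[j] the learned context matching degrees (the names, values, and constant are illustrative):

```python
def select_component(candidates, ss, cmd, const=0.1):
    """Pick candidate i maximizing P_i = max_j ss[i][j] * (cmd[j] + const).
    The additive constant keeps components with no context history selectable."""
    def p(i):
        return max(ss[i][j] * (cmd[j] + const) for j in range(len(cmd)))
    return max(candidates, key=p)

ss = {"A": [0.9, 0.2], "B": [0.4, 0.8]}
cmd = [0.5, 0.9]          # learned from past compositions (e.g., via C4.5)
print(select_component(["A", "B"], ss, cmd))   # "B" (0.8 * 1.0 = 0.8)
```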
interesting evaluations for dynamic composition of context-aware services. I skipped the experiments but they might be worth reading.

Combining Quality of Service and Social Information for Ranking Services

Qinyi Wu, Arun Iyengar, Revathi Subramanian, Isabelle Rouvellou, Ignacio Silva-Lepe, Thomas Mikalsen
http://www.research.ibm.com/people/i/iyengar/ICSOC2009.pdf

FROM BEFORE

The authors propose ServiceRank as a method to rank services based on the opinion of those using a service as well as local behavior of services, mainly response time and response failure.

Problems:
  1. What is the proper response from a service? Does it have to match a value expected by the client, or can it be just any response? How does the client know the proper value for a computation prior to using a service? Does it then require knowledge about the service?
  2. How is data from all services collected? Is there a single repository where the services publish the results of their experiments with other services?


ServiceRank is a ranking algorithm that brings the community's opinion about using a service into consideration when deciding on the quality of a service.

Problems:
  1. It is not clear how requesting services decide about the correctness of the returned data. Is a response rated as correct when any response is received, or only when the correct response is returned? In the first case, how would the service be guaranteed to be the proper service, and in the second case, how are we going to know the expected return value for the request?
  2. How does the cumulative data about the experience of other services get collected? There are monitoring services that keep records of all services; there are going to be multiple monitor services associated with the applications connected to SOAlive. This is not a realistic case for real-world applications.


The paper also contains a set of experiments demonstrating how a composition can be affected by quality of service and how quality can be studied cumulatively across a service composition.

The approach can be used as a method for cumulative study of service composition within a composition chain.

Tuesday, September 29, 2009

Evolving Services from a Contractual Perspective

Vasilios Andrikopoulos, Tilburg University, Netherlands
Salima Benbernou, University Lyon1, France
Mike Papazoglou, Tilburg University, Netherlands
http://infolab.uvt.nl/pub/andrikopoulosv-2009-124.pdf

-------------------------------------

two views are introduced

exposition vs expectation
required vs provided

A contract records the benefits and the obligations
the contract describes what the acceptable results and contributions are for the described task

The client formulates the contract, instructing the provider on what functionalities are going to be used.

Shallow Changes are divided into two categories
  • contractual invariance changes: a simple mediation would do the trick
  • contractual evolving changes: changes that require revisiting the contract but do not require changes in the other party

Monday, September 28, 2009

A multi-agent system for the reliable execution of automatically composed ad-hoc processes

http://www.springerlink.com/content/q7684735gl420772/

Scenario
  • the user states his preferences -> comedy movie, restaurant with French cuisine
  • ad hoc composition is required when the user is roaming
  • location based composition for nomadic users
Approach

Sunday, September 27, 2009

Components AND Services: a marriage of reason

http://www.i3s.unice.fr/~mh/RR/2007/RR-07.17-P.COLLET.pdf

The article provides a comparative study and positioning of components and services with the overall objective of arguing that the two approaches can be seen as much more complementary than competitors.

components
  • black, white, grey boxes
  • arbitrary granularity
  • state
  • reflection
  • composition
  • structural composition: bindings between components need to be controlled, as do the connections between components and sub-components.
services in SOA
  • black boxes
  • coarse granularity
  • loose coupling
  • statelessness
  • discovery
  • orchestration
CBSE is more favored in middleware and embedded systems which exhibit strong NF constraints

SOA is typically preferred in e-business applications, which necessitate chaining calls to online services provided by distinct companies.

Application to Dynamic Communities System

Amui is a messaging server for dynamically and automatically grouping users according to their common interests. Users are filtered based on their topics of interest and are redirected to chat rooms associated with user-defined sets of keywords. Users will receive ads and can also include plugin-like applications to carry more content to the users, e.g., videos.

Amui Server is composed of three subcomponents:
  • AmiFacade
  • The Core
    • manage users
    • manage groups: group creation/administration
    • match user keywords to group topics (UGManager): implements the main functionality
  • Advertisement Proxy
Fractal WS is a toolkit that makes any Fractal component compatible with Web Services technology. It uses generative programming, and statically typed stubs are generated.

Fractal SCA proposes a bidirectional bridge between Fractal and SCA. From Fractal to SCA, components are enhanced so that they are able to create SOAP bindings. In the other direction, a SOAP communication service is provided to handle communications from SCA to Fractal.

Enhancing Residential Gateways: OSGi Service Composition

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4140904

Scenario

Home Security Service:

Fire
  • ringing alarm
  • unlocks the doors
  • calls the fire station
  • informs John using the most appropriate device
Intrusion
  • locks the doors
  • calls the police
  • informs John
They propose a BPEL-like description for providing a composition contract: a virtual bundle is loaded into the OSGi ServiceRegistry with the BPEL specification for the service, registers itself with the BPEL engine, and enables the engine to use the loaded BPEL. The BPEL is then used by the system to orchestrate services in the OSGi ServiceRegistry.

OWL-S/OSGi is used as a semantically enabled OSGi framework in order to provide semantic enhancements to OSGi. The partner link is expanded with additional elements to incorporate semantic information for the partners providing the desired methods or port types.

Dynamic Service Composition using Semantic Information

http://portal.acm.org/citation.cfm?id=1035174

Static Service Composition (proactive)
  • workflow or state chart is designed to describe the interaction pattern
  • BPEL, WSCI
Dynamic Service Composition (reactive)
  • autonomous application composition
  • eFlow[4], SWORD[13]
  • dynamic service composition is useful for ubiquitous and end user applications
Scenario
  1. get address for restaurant and home
  2. invoke the direction generator web service
  3. print out the result image
four domains are introduced
  1. data types
  2. semantics (Concept)
  3. logics
  4. components
Comparing CoSMoS with SCA
  • data types in CoSMoS are equivalent to DSO in SCA
  • semantics are not supported in SCA
  • logics is somewhat meaningless for CoSMoS
  • components have equivalents in SCA
  • composites are not defined in CoSMoS
CoSMoS is a new component model CoRE converts the metadata for the discovered components into CoSMoS and CORE has a pluggable architecture.

Semantic Graph Based Service Composition (SeGSec)

Request Analyzer <-> ServiceComposer <-> Reasoner <-> ServicePerformer

"ServiceComposer genrates the execution path by connecting operations of components", How?? This is not something trivial. Matching the operations for different interfaces for services is quite difficult. I am not sure how it is done. How do they perform this matching of operations at the high level? Reasoning and semantic resolution should be performed before the ServiceComposer module generates the path.

for other systems, there should be a template implementing the requested service. so a B2B template matching is required in this case.

Towards a Programming Model for Service-Oriented Computing

The composition model is built on top of component models. We look for customization without source code modification.

Composition
  • behavioral
  • adaptation -> point of variability
  • structural
  • mediation -> message processing
Dynamic Binding
  • dependency injection -> automatic dependency resolution
  • Mediation Model -> dynamic mediation -> dynamic binding
IMPORTANT DEFINITION: a service is a visible access point to a component. A component can offer multiple services or reference multiple services

service specification -> access channel
  • interface -> WSDL
  • policies
  • behavioral description -> BPEL
service component implementation (similar to the SCA model, in which we have component, composite, reference, services, properties)
  • service spec: characteristics of a service
  • required services
  • service properties
  • container directives
  • implementation artefacts
service component
  • Name
  • implementation
  • values for properties
  • specification + resolution for services
  • wires + queries + QoS policies
directives for service composition could be given using either Pragmas or Control files
  • Control Files: BPEL or other workflow descriptions
  • Pragmas: Annotations within the code
Structural Composition (Mediation)
  • wiring: wires represent the flow of messages
  • Bundle -> a collection of services
  • Event-Driven Composition
  • Mediation
  • Content-based Routing
  • Transformation -> transform and map messages
  • Augmentation -> adding additional information to the message
  • Side Effect -> extra operation on the messages
Behavioral Composition - Process Oriented (Adaptation)
  • workflow oriented
  • state machine
  • UML state diagram
  • BPEL -> DAG activity nodes
  • short running
  • long running
  • has one or more interfaces
  • different implementations for canceling
SDO (Service Data Object) = a uniform way of representing service data + the abstract tree is used to access data irrespective of how it is provided

IMPORTANT

SOA component model improves over CORBA/J2EE/COM
  • control language
  • XSD & WSDL are more tolerant of interface evaluation
  • call/return + one way messaging
  • rich contracts
  • QoS
  • behavioral description
  • mediation + intermediaries
SCA -> abstraction of implementation concerns
Web Services -> abstraction for interoperability concerns

Tuesday, September 01, 2009

a distributed service-oriented mediation tool

devices do not use the same data representation, so alignment transformations are needed.

mediation: to aggregate disparate information sources in a timely fashion which enables interoperability and integration of services.

mediation also helps with adding new quality of service concerns without modifying the code at the client side.

four categories of mediation
  1. control mediation -> routing, filtering, aggregation
  2. transformation -> matching types
  3. QoS mediation
  4. SLA enforcement -> transcoders
Some open-source ESB solutions based on Java Business Integration (JBI):
  1. Apache ServiceMix
  2. ObjectWeb Petals
  3. Codehaus Mule
The model for mediation can be changed from reactive to proactive. In the current reactive model, there is a mediation service which gets hit by the information that needs to be mediated. This is mainly true for transformation mediation, control, and maybe SLA mediation. The QoS mediation, however, is still a proactive mechanism, as it needs close coupling with the service itself: to add security, for example, it is not possible to send data to a third-party service in the hope of receiving the encrypted data back.

each mediation application is defined as a set of connected components and each component implements a single mediation operation.

three ways for generating mediators
  1. search for pre-existing mediators
  2. to use a code generation tool that generates skeleton code for mediators, with methods to read and write on ports
  3. specialized mediators such as the ones for generating Web service clients and service bridges
the nodes for mediation are built on OSGi, which provides facilities to load and update Java code dynamically. Two modules are required on top of each node:
  1. the MOM bridge
  2. the administration module
The administration console manages the relations between administration modules in a centralized manner. The console takes care of monitoring the administration modules spread over several nodes.

A mediator factory is registered to take care of all mediators installed in the system; it also installs mediators.

MOM in the mediator enables connecting the in ports to the out ports

in -> represents a subscriber to a channel
out -> represents the publisher to a channel.

Messages published to a channel are forwarded to the mediators on other nodes that subscribe to the same channel.
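A minimal sketch of a mediator wired between two such channels; the in-process Channel class stands in for a MOM topic (JORAM in the paper), and all names are illustrative:

```python
class Channel:
    """An in-process stand-in for a MOM topic."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for cb in self.subscribers:
            cb(message)

class Mediator:
    """The in port subscribes to one channel; the out port publishes
    the transformed message to another channel."""
    def __init__(self, in_channel, out_channel, operation):
        self.out_channel = out_channel
        self.operation = operation      # single mediation operation
        in_channel.subscribe(self.on_message)
    def on_message(self, message):
        self.out_channel.publish(self.operation(message))

# Usage: chain two mediators over three channels.
a, b, c = Channel(), Channel(), Channel()
Mediator(a, b, str.upper)
Mediator(b, c, lambda m: m + "!")
c.subscribe(print)
a.publish("hello")                      # prints "HELLO!"
```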

JORAM is a JMS implementation used in the MOM module.
A DHT-based algorithm such as Scribe could be a more scalable solution. What is Scribe?!

The failure detector is based on a two-ring algorithm which supports new node arrivals and node failures. Does it have anything to do with the mediators on the nodes and detecting whether they stay up or go down?


Monday, August 31, 2009

Mediation and Enterprise Service Bus: A position paper

ESB tries to decouple the called service from the transport medium.

ESB brings flow-related concepts such as transformation and routing to SOA. Flexibility in the transformation layer + easy connection between services.

An ESB is required to provide the following characteristics:
invocation, routing, mediation, messaging, process choreography, service orchestration, complex event processing, QoS, management.

----
Paper

ESB is a mediation solution: early mediation solutions evolved to enhance the global quality of services provided by a large scope of DB systems. ESB uses mediation to facilitate the design of applications based on Web Services.

Wiederhold in [8, 9], mediation is "a layer of intelligent middleware services in information systems, linking data resources and application programs".

a mediator as "software module that exploits encoded knowledge about certain sets or subsets of data to create information for a higher layer of applications. It should be small and simple, so that it can be maintained by one expert or, at most, a small and coherent group of experts"

types of mediators:
  • examiners -> content body for validation, authentication, etc.
  • transformation mediators -> content body for data type transformation
  • transcoder mediators -> modify the format and not the content; help messages traverse different protocols
  • cache mediators
  • routers
  • operator mediators -> comparators, aggregators, etc.
  • clone mediators -> dispatch a unique request to several services
ESB is more than a mediator. It also provides
  • a trading service in order to find services
  • communication service (mostly asynchronous, via MOM and pub/sub)
  • orchestration service (based on BPEL)
Mediation in ESB includes
  • Security
  • Dynamic Routing and dispatch of requests (load balancing, responding to data source failure)
  • other non-functional actions related to QoS management -> quality measurement, tracing, caching, failure detection, recovery.
Apache Synapse -> mediation for web services

Open-source ESBs
  • Celtix
  • Petals
  • Objectweb
  • Mule
  • ServiceMix
  • OpenESB
The Java Business Integration (JBI) API doesn't deal closely with mediation but provides standardization of exchanges between services.

With workflow languages we can model our application, but we can't separate the proxies from the application model. As a consequence, any change in the implementation will also affect the model and vice versa, even though in many situations that is not desirable.
Looking at the picture above, the question is whether an event processing engine handles the process of mediation, and to what extent.

designing a mediation element should be kept separate from composing a mediation element.

A mediation development tool is used. This tool allows one to:
  • describe mediation chains with ADL
  • describe the execution environment
  • automate the deployment and administration

This is a very interesting image as it is very close to things that we have in mind with mediation and enabling composition of services through mediators.

ESB combined with a pub/sub middleware seems to be the way to go towards this direction. In this sense mediator is a component that is able to receive 1 to n pieces of data and send 1 to n pieces of data. it can be seen as a binding between clients (1..n) and services (1..n).

A mediation chain is close to the concept of partnerlink in BPEL.

OSGi can help ESB with improving dynamism: mediators can be defined and implemented as OSGi bundles to be dynamically loaded into the ESB infrastructure, enabling communication between services of different types.

Sunday, August 30, 2009

** on adopting content-based routing in service oriented architectures

WS-Eventing and WS-Notification are defined in SOA to introduce asynch notifications among web services.

WS-Notification has the following parts
  • WS-Base Notification
  • WS-Brokered Notification
  • WS-Topic
WS-Notification gets close to WS-Eventing when it comes to WS-Base Notification which introduces the core roles of
  • notification producer
  • subscription manager: enables a subscriber to pause/resume/cancel a subscription
  • subscriber
  • notification consumer
WS-Brokered Notification introduces the concept of a notification broker rather than direct communication between the source and the sink of messages.

ESB also supports Content-Based Routing (CBR) in order to transfer messages from one service to another in a B2B type of interaction. Messages can be routed based on some pre-defined rules.
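A minimal sketch of such rule-based content-based routing; the rule shapes, message fields, and destination names are all illustrative:

```python
# Each rule pairs a predicate over message content with a destination
# service; the first matching rule wins.
rules = [
    (lambda m: m.get("type") == "order" and m.get("amount", 0) > 1000,
     "approval-service"),
    (lambda m: m.get("type") == "order", "order-service"),
]

def route(message, default="dead-letter"):
    for predicate, destination in rules:
        if predicate(message):
            return destination
    return default

assert route({"type": "order", "amount": 5000}) == "approval-service"
assert route({"type": "order", "amount": 10}) == "order-service"
```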

Mule and IBM WebSphere are examples of ESB message brokers.

** Addressing QoS for pub/sub and CBR systems is a challenge

Different discovery approaches
  • SeCSE: an approach based on facets for discovery of services in different phases
  • DIRE is the publication infrastructure developed for SeCSE
  • in DIRE registries can hold any type of service
  • REDS is used as a pub/sub mechanism to replicate service descriptions
  • Service descriptions are shared among all registries subscribed to a topic
  • WS-Discovery proposal
  • IP multicast -> not applicable to large-scale systems
  • Meteor-S uses MWSDI and JXTA to provide p2p network of UDDI registries

overall two methods for service discovery
  • replication of service descriptions
  • propagation of queries within the network of registries

Vision of SOA promotes an environment which is GLOBAL and OPEN for service providers to offer their services and consumers to access and use them. Composition at runtime is also promoted to enable dynamic adaptation to changes in the environment.

--> PAPER ARGUMENT: a scalable service discovery infrastructure to allow different organizations to offer and access services globally, complemented by a pub/sub infrastructure to suit the needs of those systems that have an inherently asynch behavior and to monitor the environment.

------------

REDS is a framework of java classes to easily build a modular CBR infrastructure.
  • Defining message and filter
  • Routing Strategy can be modified
There are three challenges in CBR:
  • the matching challenge
  • the security challenge
  • the reconfigurability challenge
Open issues:
  • QoS for the middleware (e.g., how messages are delivered FIFO, random , etc.)
  • Dependability vs Scalability
  • Expressiveness vs Scalability
  • Reflectiveness
  • Context-awareness: context information to be used as notifications

Saturday, August 29, 2009

A taxonomy of quality of service aware adaptive event dissemination middleware

This paper has to be read later in more detail. It provides an overview of EBMs and the set of QoS requirements that they need to adhere to.

Problem that the paper tries to address:

few middleware options provide support for non-functional service guarantees. There is a lack of a comprehensive survey that analyzes all the EDMs providing support for QoS.

some of the QoS requirements for the EBM or EDM
  • security
  • load balancing
  • reliability
  • fault tolerance
  • ordering
  • semantics delivery
The event model has three message types
  • advertisement
  • notification
  • subscription
There are two types to composite events
  • temporal: time dependencies between primitive events.
  • spatial: conjunctive/disjunctive combination of individual events

michlmayr 2008 - two papers

1. publish/subscribe in the VRESCO SOA Runtime

The overall architecture for VRESCO looks very similar to a combination of ReCoIn and OSGiBroker. Client Programs make a call to the system using DAIOS and the client library. VRESCO runtime provides support for publishing/subscribing/querying/notification using the engines that VRESCO offers.

Also, persistence is supported for storing events into the DataBase and for querying a history of events, etc.

Below is a snapshot of the VRESCO event architecture, which is again very close to the event architecture that OSGiBroker offers.
VRESCO makes use of the ESPER event processing engine to enable filtering of events and to manage their collection and aggregation, inferring meaning from the published events.

Listeners are Esper EQL commands that request for a specific type of event to be delivered to the subscriber registered with the event processing engine.

Persistence is supported in their VRESCO pub/sub system by using the NHibernate engine, which persists all events. The event history can then be queried for specific sets of exchanged events.


2. Advanced Event Processing and Notifications in Service Runtime Environments

VRESCO supports the following set of events

  • Binding and Invocation Events
  • Query information events
  • user information events
They consider two types of external consumers:
  1. human
  2. services (via WS-Eventing and WS-Notification)
Event Ranking
  • Priority-based
  • Hierarchical (root events have higher importance compared to leaf events)
  • Type-based
  • Content-based (the keyword "exception" is more important than "warning")
  • Probability-based (frequent events are less important than infrequent events)
  • Event Patterns
Event correlations are used to avoid losing track of events and their relationships; something like an event identifier can be used to correlate events to one another.
I think event correlation can be done either by scanning the content for events or by looking into channels and separating events based on the channels they get published to.

Correlation sets enable users to track all relevant events for a correlation ID without losing the track of what is happening between these correlation sets.

VRESCO Web service Eventing specifies 5 different operations
  • subscribe
  • unsubscribe
  • renew
  • expires
  • subscription ends
Important: due to the large set of published events, relational databases are not always preferable, as a more efficient indexing strategy is required. A vector space engine, as described in [17], might be a better choice; the advantage is that the search returns a list of fuzzy matches together with a similarity rating.

Throughput for different methods of publishing events can be measured. Also, since caching is used, it is easier to see how throughput affects the overall behavior of the system. As for events, we can store all events in the DB and query it only when needed, decreasing the chance of low performance from dealing with the relational DB.
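A minimal sketch of the vector-space idea, using term-frequency vectors and cosine similarity; encoding events as text strings is an assumption here ([17] describes a full engine):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(events, query, k=5):
    """Return the k best fuzzy matches together with similarity ratings.
    events: list of event description strings (purely illustrative)."""
    qv = Counter(query.lower().split())
    scored = [(cosine(Counter(e.lower().split()), qv), e) for e in events]
    return sorted(scored, reverse=True)[:k]

print(search(["binding failed on service A", "query issued by user B"],
             "service binding"))
```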

Their related work is very interesting and important. It is a MUST READ set of papers.




Sunday, August 16, 2009

Henssen 2008 - QoS attributes

Promotes the use of MCDA (multi-criteria decision analysis)
  • important to analyze weights related to criteria + weights related to interactions between criteria



Dealing with Quality Tradeoffs during Service Selection

aggregation approaches for integrating weights into the service selection procedure for a QoS:
  1. compensatory [29][30]: weights amount to substitution rates; the priorities for different criteria are expressed on the same scale
  2. noncompensatory [4, 8, 21, 31]: weights are simply a measure of the relative importance of the criteria involved, and are only used to indicate that relative importance

Promotes the use of outranking methods for defining global priority constraints

An outranking relation is a binary relation S on the set of potential choices A such that ai S aj means ai is at least as good as aj.

Pj(a, b) = Fj[dj(a, b)] for all a, b
dj(a, b) = gj(a) - gj(b)

where gj(a) is the score of service a on quality criterion j

There are six categories of functions for F
  1. immediate preference
  2. indifference threshold
  3. increases continuously until reaching the indifference threshold
  4. comprises an indifference and a preference threshold
  5. increases continuously between an indifference and a preference threshold
  6. Gaussian law with a fixed standard deviation
  • aggregating the preferences: π(a, b) = Σj wj Pj(a, b)
  • wj is the preference (weight) for characteristic j


  • outranking flows
  • The positive outranking flow expresses how an alternative a outranks all the others (n-1 alternatives)
  • The negative outranking flow indicates how an alternative a is outranked by other n-1 alternatives

The complete ranking of the PROMETHEE II method is derived from the net flow φ(a) = φ+(a) - φ-(a) (sketched below).
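Since the formula itself did not survive in these notes, here is the standard PROMETHEE II net-flow computation, presumably what was meant (names are illustrative):

```python
def promethee_ranking(alternatives, pi):
    """Rank by net flow phi(a) = phi+(a) - phi-(a).
    pi[(a, b)]: aggregated preference of a over b, i.e. sum_j w_j * P_j(a, b)."""
    n = len(alternatives)
    def net_flow(a):
        others = [x for x in alternatives if x != a]
        phi_plus = sum(pi[(a, x)] for x in others) / (n - 1)
        phi_minus = sum(pi[(x, a)] for x in others) / (n - 1)
        return phi_plus - phi_minus
    return sorted(alternatives, key=net_flow, reverse=True)

pi = {("s1", "s2"): 0.7, ("s2", "s1"): 0.2}
print(promethee_ranking(["s1", "s2"], pi))   # ['s1', 's2']
```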

The paper is clumsy when it comes to the real experiment: values are drawn from nowhere and it is not clear how they are calculated. The reader needs prior knowledge, and there is a lack of proper description.

Saturday, August 15, 2009

Monitoring the QoS for Web services

QoS metrics
  • Provider-advertised (execution price)
  • Consumer-rated (service reputation)
  • Observable
  • IT Level
  • Business Level
The QoS value needs to be recomputed whenever the execution of a service instance is completed

observational model
  • service monitoring architecture
  • QoS metric computation
  • high volume of service operational events
  • complexity of metric computation
  • metric value persistence
computing/updating metric values in realtime using a high performance metric computation engine

contributions of the paper:
  • monitoring-enabled SOA infrastructure
  • declarative event detection
  • event routing
  • monitoring QoS with small programming effort
  • efficient QoS computation
  • compilation/interpretation approach
  • improved event-processing throughput
  • custom executable ECA rules at build time
  • observation model is transformed to invoke generated code
  • model-driven planning to enable wait-free concurrent threading
QoS is a broad concept encompassing a large number of context-dependent and domain-specific nonfunctional properties.
  1. Process Monitor Context
  2. Service Monitor Context (Service Interface Monitor Context)
  3. QoS metrics
Event (eventPattern)[condition]|expression (a minimal sketch follows this list)
  • eventPattern = service operational event / change in the metric
  • condition = circumstance to fire an event
  • expression = association predicate + value assignment
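
A minimal sketch of how such an ECA rule might be represented and evaluated, assuming a hypothetical in-memory rule engine; the event names, predicate, and metric update are illustrative, not from the paper:

# Hypothetical ECA rule: Event(pattern)[condition]|expression.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    event_pattern: str                        # service operational event / metric change
    condition: Callable[[dict], bool]         # circumstance under which the rule fires
    expression: Callable[[dict, dict], None]  # association predicate + value assignment

metrics = {"avg_response_time": 0.0, "count": 0}

# Illustrative rule: on every OperationCompleted event carrying a latency,
# fold the latency into a running average (the metric value assignment).
rule = EcaRule(
    event_pattern="OperationCompleted",
    condition=lambda e: "latency" in e,
    expression=lambda e, m: (
        m.__setitem__("count", m["count"] + 1),
        m.__setitem__("avg_response_time",
                      m["avg_response_time"]
                      + (e["latency"] - m["avg_response_time"]) / m["count"]),
    ),
)

def dispatch(event: dict) -> None:
    """Deliver an event to every rule whose pattern and condition match."""
    if event["type"] == rule.event_pattern and rule.condition(event):
        rule.expression(event, metrics)

dispatch({"type": "OperationCompleted", "latency": 0.120})
dispatch({"type": "OperationCompleted", "latency": 0.080})
print(metrics["avg_response_time"])  # running average of observed latencies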

Metric computation engine takes observation models as input and generates event subscriptions for the semantic pub/sub engine. Thus, the events from one service engine are delivered to another service engine.

High Performance Metric Computation for QoS Metrics

ECA rules are mapped to a state chart with the transitions from events to metrics or metrics to metrics.

Execution of the state charts involves (see the sketch below):
  • interpretation of the state chart logic
  • interpretation of the expressions within the states
  • thread scheduling for executing events
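
Given the compilation-interpretation approach listed among the contributions, a rough, hypothetical sketch of the split: the state-chart transition logic stays interpreted, while each state's expression is pre-compiled to a callable at build time. The states, events, and expressions are invented:

# Hypothetical compilation-interpretation split: the state chart is walked by
# an interpreter, but each state's expression is pre-compiled to a callable.
compiled = {
    # build time: expression source -> Python function (stands in for generated code)
    "idle":      eval("lambda m, e: m"),                        # no-op
    "measuring": eval("lambda m, e: m + e.get('latency', 0.0)"),
}

transitions = {("idle", "start"): "measuring",
               ("measuring", "stop"): "idle"}

def run(events):
    state, metric = "idle", 0.0
    for e in events:
        # interpreted part: state chart logic (transition lookup)
        state = transitions.get((state, e["type"]), state)
        # compiled part: the expression attached to the current state
        metric = compiled[state](metric, e)
    return metric

print(run([{"type": "start"},
           {"type": "tick", "latency": 0.2},
           {"type": "stop"}]))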

Saturday, August 08, 2009

Examples of Mashups

IBM's QED Wiki
Yahoo Pipes
Google Mashup Editor
Microsoft's Popfly

Semantics identified by Web services community
  • data (I/O)
  • functional (behavioral)
  • nonfunctional (QoS, policy)
  • execution (runtime, infrastructure, exceptions)
Semantic Annotation of Web Services
  • hREST
  • SA-REST
Mashups available on the Web
  • ProgrammableWeb
  • APIHut

Friday, August 07, 2009

A Model for Web services Discovery with QoS

Ran, S. 2003. A model for web services discovery with QoS. SIGecom Exch. 4, 1 (Mar. 2003), 1-10. DOI= http://doi.acm.org.proxy.lib.sfu.ca/10.1145/844357.844360


Service supplier -> provides the certifier with QoS information -> the certifier either accepts or downgrades the claims -> returns the result to the supplier -> the supplier registers the service (the functional descriptions + the certified QoS); the UDDI registry verifies the QoS with the certifier.

Web services are provided by third parties and are invoked dynamically over the Internet; thus, their QoS can vary greatly.

A framework is needed to capture the QoS provided by the supplier and the QoS required by the customer, and to match the two (see the sketch after the QoS categories below).

ISO 8402 Description for Quality: the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.

QoS: a set of non-functional attributes that may impact the quality of the service offered by a Web service

Categories for QoS

1. Runtime-related quality of service
  • Scalability (related to throughput and performance)
  • Capacity (limit on concurrent requests)
  • Performance (speed in completing a service request)
  • Response time
  • Latency
  • Throughput (number of completed service requests over a time period)
  • Reliability
  • Mean Time Between Failures
  • Mean Time To Failure
  • Mean Time To Transition
  • Availability
  • Robustness / Flexibility
  • Exception Handling
  • Accuracy
2. Transaction Support
  • Integrity (ACID properties):
  • Atomicity -> executes entirely or not at all
  • Consistency -> maintains data integrity
  • Isolation -> runs as if no other transactions are present
  • Durability -> the results are persistent
3. Configuration Management and Cost-Related QoS
  • Regulatory (how well the service is aligned with regulations)
  • Supported Standard (whether the service complies with standards)
  • Stability / Change Cycle (frequency of change in the service interface)
  • Cost (the cost of using the service)
  • Completeness (the difference between the specified and the implemented set of features)
4. Security related QoS
  • Authentication
  • Authorization
  • Confidentiality
  • Accountability
  • Traceability + Auditability
  • Data Encryption
  • Non-repudiation (a principal cannot deny requesting the service after the fact)
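
Under the matching framework mentioned above, a minimal sketch of checking an advertised (certified) QoS offer against a consumer's requirements; the attribute names, directions, and values are invented for illustration:

# Hypothetical QoS matching: does an advertised (certified) QoS offer
# satisfy a consumer's requirements? Higher-is-better vs lower-is-better
# attributes are handled separately.
HIGHER_IS_BETTER = {"availability", "throughput"}
LOWER_IS_BETTER = {"response_time", "cost"}

def satisfies(offer: dict, required: dict) -> bool:
    """True if the offer meets every requirement the consumer stated."""
    for attr, bound in required.items():
        if attr in HIGHER_IS_BETTER and offer.get(attr, 0) < bound:
            return False
        if attr in LOWER_IS_BETTER and offer.get(attr, float("inf")) > bound:
            return False
    return True

# Invented example: a certified offer and a consumer requirement.
offer = {"availability": 0.999, "response_time": 0.25, "cost": 0.01}
required = {"availability": 0.99, "response_time": 0.5}
print(satisfies(offer, required))  # True: the offer meets both bounds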

Wednesday, May 13, 2009

Context Aware Middleware

Grounded theory is used for the survey: the theory emerges from the interrelations of general categories

Traditional Middleware: Hide heterogeneity and distribution

in pervasive computing, this heterogeneity and distribution are dealt with in the middleware without being hidden.

Types of Context
  • Environment (infrastructure-based vs. self-contained)
  • Storage (ordering data based on context information vs. centralized storage facilities for storing context data)
  • Reflection (reification and absorption) (application, middleware, context info)
  • Quality
  • Adaptation (transparent, profile, rules)
  • Migration (adaptive middleware systems)
  • Composition
Aura
1. task oriented
2. task manager: managing tasks
3. environment manager: managing services
4. context observer: managing context (i.e., the intent of the user)

CARMEN:
1. proxies
2. proxy will migrate with the user
  • resource move with the agent
  • copy the resources when the agent migrates
  • using remote reference
  • re-binding to new resources
Resources have profiles
  • user profiles: preferences, security, ...
  • device profiles: hardware and software devices, ...
  • service component profiles: interface for services
  • site profiles: a group of profiles belonging to a single location
CARISMA
Profile: metadata of the middleware
1. passive profiles: actions the middleware should take when specific context events occur
2. active profiles: relations between services used by the application and the policies to deliver them

reflection is used to alter the profile kept by the middleware

Cooltown
Devices, People, and things are identified by a URL
  • context (where, when, who, what, how)
  • relationships (Contains, isContainedIn, isNextTo, isCarriedBy)
CORTEX
based on sentient objects: a sentient object senses the behavior of neighbouring objects, reasons about it, and manipulates physical objects accordingly. Sentient objects dynamically discover each other and share context information
  • Publisher - Subscriber: discovery
  • Group Communication
  • Context
  • QoS management
configured at deployment time; reconfigured at run time using Java reflection

Gaia
a meta-operating system
  • location
  • context: collected by context providers
  • event
Entities:
  • application
  • services
  • device
  • person
MiddleWhere
uses location providers
  • location service
  • spatial database
  • reasoning engine
The model for location consists of (Points, Lines, Polygons)

Quality of location information (resolution, freshness, confidence)

MobiPADS
Mobilet is the entity that provides a service
  • slave: resides on the server
  • master: resides on the mobile device
context of mobile devices: 1. processing power, 2. memory, 3. storage, 4. network device, 5. battery

applications have access to reflective interfaces for context, service configurations, and adaptation strategies

SOCAM: ontologies to model context

Context Provider: external or internal context
Context Interpreter: external context provider
Context Database: instance of the ontology of context
Context Reasoner: to derive more contextual information
Service Location Service: the registry for locating services of Context Providers
Context-aware Mobile Service: services that a context provider registers with the Service Location Service

rules are used to associate services with context information
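
A toy sketch of such rule-based association, assuming context facts stored as (subject, predicate, object) triples in the SOCAM spirit; the triples, rule, and service name are invented:

# Hypothetical context rule: fire a service when all its context
# conditions are present as (subject, predicate, object) triples.
context_db = {
    ("john", "locatedIn", "bedroom"),
    ("bedroom", "lightLevel", "low"),
}

rules = [
    # conditions -> service to activate (all names invented)
    ({("john", "locatedIn", "bedroom"), ("bedroom", "lightLevel", "low")},
     "TurnOnReadingLight"),
]

def matching_services(facts):
    """Return the services whose context conditions are all satisfied."""
    return [service for conditions, service in rules
            if conditions <= facts]  # subset test: every condition holds

print(matching_services(context_db))  # ['TurnOnReadingLight']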

Types of Context
Geographical Context
Physical Context
Social Context
Organisational Context
User Context
Action Context
Time Context

Monday, May 11, 2009

A survey of Adaptive Middleware

  • Middleware is connectivity software that encapsulates a set of services residing above the network operating system layer

Taxonomy of Middleware (Emmerich)
  • Transactional
  • Message-oriented
  • Object-oriented
  • Procedural

Types of Object-oriented programming models
  1. CORBA
  2. RMI
  3. DCOM

Supporting paradigms for adaptation include:
  • Computational Reflection:
  • Component-based design
  • Aspect oriented programming
  • Software design patterns
Computational Reflection:
  • the ability of a program to reason about and alter its own behavior
- Reflective system (base-level objects): the functional aspects of the system
- Self-representation (meta-level objects): the implementation aspects of the system
- MOP (meta-object protocol): an interface to inspect and modify the base-level objects

*Behavioral reflection in middleware: modify the behavior of a program by generating code at the self-representation level and injecting it into the base level (see the sketch below)
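
A tiny sketch of behavioral reflection in that sense, assuming a hypothetical meta-level wrapper that intercepts base-level method calls and injects extra behavior (logging here); the class and method names are illustrative:

# Hypothetical meta-level wrapper: intercepts base-level calls (a MOP-style
# hook) and injects behavior without touching the base-level object.
import functools

class BaseService:                      # base-level object: functional aspects
    def handle(self, request):
        return f"handled {request}"

class Meta:                             # meta-level object: implementation aspects
    def __init__(self, base):
        self._base = base

    def __getattr__(self, name):
        attr = getattr(self._base, name)
        if not callable(attr):
            return attr

        @functools.wraps(attr)
        def intercepted(*args, **kwargs):
            print(f"[meta] intercepting {name}{args}")   # injected behavior
            return attr(*args, **kwargs)
        return intercepted

svc = Meta(BaseService())
print(svc.handle("req-1"))   # the call is rerouted through the meta level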

Component-based design:

components are self-contained
large-scale reuse of components (COTS)
late composition and late binding are supported

Examples: DCOM, EJB, Corba Component Model (CCM)

Aspect Oriented Programming

* cross-cutting concerns are intertwined in complex programs
* disentangling the cross-cutting concerns leads to simpler software development, maintenance, and evolution

Software Design Patterns
The goal of software design patterns is to create a common vocabulary for communicating insight and experience about recurring problems and their known refined solutions

4. Taxonomy of Adaptive Middleware

1. Schmidt's classification:
  • host infra-structure: a higher-level API than the OS, NPs, generic services
  • distribution: RMI; solves the heterogeneity of NPs and OSs
  • common services: common functionality such as fault tolerance, security, load balancing, event propagation, logging, persistence, ...
  • domain services: specific classes of distributed applications
  • application
2. Adaptation type
  • Customizable -> static, at compile time
  • Configurable -> static, before starting the system
  • Tunable -> dynamic, before using the system; the middleware core remains intact
  • Mutable -> dynamic, during run time
3. Application Domain
  • QoS
  • Dependable
  • Embedded
QoS-oriented middleware is classified to the following categories:
  • Real-time middleware
  • Stream-oriented middleware
  • Reflection-oriented middleware
  • Aspect-oriented middleware
Reflection
  • Structural: ability of a system to inspect and modify its internal architecture (architecture and interface)
  • Behavioral: ability of a system to inspect and modify its computation (interception and resources)
Existing approaches and middleware systems

Ace
  • Layer: Host-infrastructure
  • Pattern: Service configurator pattern
  • Type: repeatedly tunable middleware
ACE ORB (TAO)
  • Pattern: Strategy Design Pattern
  • Type: configurable middleware - repeatedly tunable middleware
  • Layer: distribution
Component-Integrated ACE ORB (CIAO)
  • Pattern: component based design
  • Layer: distribution
  • Type: configurable middleware
DynamicTAO
  • Pattern: Service configurator
  • Type: repeatedly tunable middleware
OpenORB
  • component based design
Squirrel
  • Layer: distribution
  • Type: tunable and mutable (not repeatedly tunable, though; tuning happens just once)
MetaSockets
  • Type: repeatedly tunable middleware
  • Layer: Host infra-structure layer
OpenCorba
  • Type: repeatedly tunable middleware
  • Layer: Distribution
FlexiNet
  • both fine-grained (per interface) and coarse grained adaptation