- Heterogeneity
- Decomposition
- Streaming dataflow model: the program must be expressed as a dataflow graph of stream operators (see the sketch after this list)
- Predictable input rates and patterns: because the system relies on compile-time profiling, the input behavior it profiles against must be representative of what the deployment will actually see
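To make the dataflow-graph assumption concrete, here is a minimal, hypothetical sketch in Python (the actual system expresses this in its own stream language; the pipeline and names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    stateful: bool = False                        # constrains placement (see below)
    outputs: list = field(default_factory=list)   # downstream operator names

# Hypothetical pipeline: sensor samples -> filter -> event detector -> logger
graph = {
    "sample": Operator("sample", outputs=["filter"]),
    "filter": Operator("filter", outputs=["detect"]),
    "detect": Operator("detect", stateful=True, outputs=["log"]),
    "log":    Operator("log"),
}
```

The later sketches below reuse this toy graph.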
A namespace is used to logically mark the distribution of the code, i.e., it delimits the code that can be distributed, not the code that necessarily must be distributed.
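A rough illustration of that distinction, reusing the toy graph above (the namespace contents and function name here are invented, not the system's syntax):

```python
# Operators written inside the node namespace are *eligible* to run in the
# network; everything else is pinned to the server. Eligibility is an option
# the partitioner may take, not an assignment it must take.
node_namespace = {"sample", "filter", "detect"}

def eligible_locations(op_name: str) -> set:
    if op_name in node_namespace:
        return {"node", "server"}   # distributable, not necessarily distributed
    return {"server"}               # server code never moves into the network
```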
If the code placed (logically) on a node is stateful, the state of those stateful operators must be replicated on the node as well.
Stateful server operators cannot be moved out into the network; stateful node operators, however, can be brought back to the server.
The system supports two modes, conservative and permissive. In conservative mode, stateful node operators are never pushed to the server; in permissive mode they may be, provided the application can tolerate data loss.
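Taken together, the rules above could be checked as follows. This is a sketch under the assumptions of the earlier snippets (the `Operator` class and `node_namespace` set), not the paper's exact formulation:

```python
def placement_ok(op: Operator, location: str, mode: str,
                 app_tolerates_loss: bool) -> bool:
    if location not in eligible_locations(op.name):
        return False                    # server code never enters the network
    if op.stateful and op.name in node_namespace and location == "server":
        # Pulling a stateful node operator to the server puts the lossy
        # radio link on its input; only permissive mode allows that, and
        # only when the application can cope with the resulting data loss.
        return mode == "permissive" and app_tolerates_loss
    return True
```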
To measure dataflow, the Scheme compiler executes the program on sample input during compilation, producing platform-independent data rates for each edge of the graph.
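One plausible way to picture that instrumentation (hedged: the real compiler does this inside its own runtime, not via a Python counter):

```python
from collections import Counter

edge_tuples = Counter()   # tuples seen on each graph edge during profiling

def traced(src: str, dst: str, value):
    """Instrumented edge: count every tuple crossing src -> dst."""
    edge_tuples[(src, dst)] += 1
    return value

def data_rates(duration_s: float) -> dict:
    """Counts over a fixed profiling window become rates (tuples/s) that
    do not depend on the speed of the machine that ran the profile."""
    return {edge: n / duration_s for edge, n in edge_tuples.items()}
```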
Once partitioned, each partition is executed on simulated or real hardware to measure its CPU footprint. Timing statements are placed at the beginning and end of each operator; the resulting timestamps make it possible to attribute a CPU cost to each piece of the code.
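In spirit, the instrumentation is a begin/end timestamp pair around every operator invocation, e.g. (illustrative only):

```python
import time
from collections import defaultdict

cpu_seconds = defaultdict(float)   # accumulated CPU cost per operator

def timed(op_name: str, fn, *args):
    """Run one operator invocation between two timestamps."""
    t0 = time.perf_counter()
    try:
        return fn(*args)
    finally:
        cpu_seconds[op_name] += time.perf_counter() - t0
```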
Cost is measured as a weighted sum of the two contended resources: Cost = a·CPU + b·Net, where CPU is the processor load placed on the node, Net is the bandwidth sent over the node-server link, and a, b set their relative importance.
The partitioning itself is solved as an integer linear program whose objective amounts to a minimum-cost cut of the dataflow graph, as sketched below.
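To make the objective concrete: with the stateful constraints ignored, the problem reduces to a classic s-t minimum cut, which can be sketched with networkx. The weights, rates, and CPU numbers below are made up; the real system feeds in the profiled values and solves an ILP so it can also encode the stateful rules:

```python
import networkx as nx

A, B = 1.0, 2.0            # assumed weights a, b from the cost formula
INF = float("inf")

G = nx.DiGraph()
# Cutting (v, "SERVER") leaves v on the node, paying its node-side CPU cost.
for op, cpu in {"filter": 3.0, "detect": 5.0}.items():
    G.add_edge(op, "SERVER", capacity=A * cpu)
# Pin the sensor source to the node and the logger to the server.
G.add_edge("NODE", "sample", capacity=INF)
G.add_edge("log", "SERVER", capacity=INF)
# Cutting a stream edge means that stream crosses the radio link.
for (u, v), rate in {("sample", "filter"): 8.0,
                     ("filter", "detect"): 1.5,
                     ("detect", "log"): 0.2}.items():
    G.add_edge(u, v, capacity=B * rate)

cost, (node_side, server_side) = nx.minimum_cut(G, "NODE", "SERVER")
print("cost:", cost)                              # 6.0 for these numbers
print("on node:  ", sorted(node_side - {"NODE"}))
print("on server:", sorted(server_side - {"SERVER"}))
```

For these made-up numbers the cut keeps sample and filter in the network and pulls detect and log to the server: paying filter's CPU (3.0) plus the filtered stream's bandwidth (2·1.5 = 3.0) beats shipping the raw samples over the link (2·8 = 16).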


