Functional Decomposition

Concurrent Computing

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

6.2.3.2 Functional decomposition

Functional decomposition is the process of identifying functionally distinct but independent computations. The focus here is on the type of computation rather than on the data manipulated by the computation. This kind of decomposition is less common than domain decomposition and does not lead to the creation of a large number of threads, since the distinct computations performed by a single program are limited in number.

Functional decomposition partitions the problem into natural, separate units of work: rather than dividing the dataset, the units are delimited by the distinct logical operations they perform. Figure 6.5 provides a pictorial view of how this decomposition operates and enables parallelization.

Figure 6.5. Functional decomposition.

As described by the schematic in Figure 6.5, problems that are subject to functional decomposition may also require a composition phase in which the outcomes of the independent units of work are combined. As with domain decomposition, this phase often takes the form of an aggregation process; how results are composed here depends strongly on the type of operations that define the problem.

In the following, we show a very simple example of how a mathematical problem can be parallelized using functional decomposition. Suppose, for example, that we need to calculate the value of the following function for a given value of x:

f(x) = sin(x) + cos(x) + tan(x)

It is apparent that, once the value of x has been set, the three different operations can be performed independently of each other. This is an example of functional decomposition because the entire problem can be separated into three distinct operations. A possible implementation of a parallel version of the computation is shown in Listing 6.3.

Listing 6.3. Mathematical Function.

The program computes the sine, cosine, and tangent functions in three separate threads and then aggregates the results. The implementation constitutes an example of the alternative technique discussed in the previous sample program: instead of using a data structure to keep track of the worker threads that have been created, a function pointer is passed to each thread so that it can update the final result at the end of its computation. This technique introduces a synchronization problem that is properly handled with the lock statement in the method referenced by the function pointer. The lock statement creates a critical section that can be accessed by only one thread at a time, guaranteeing that the final result is properly updated.
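Listing 6.3 itself is not reproduced in this excerpt. As a rough stand-in, here is a minimal Java sketch of the same idea, with all names illustrative; synchronized plays the role of the lock statement described above:

/* Minimal sketch (not the book's Listing 6.3): compute sin(x), cos(x), and
   tan(x) in three worker threads and aggregate the partial results inside
   a critical section. */
public class MathFunction {

  private static double result = 0.0;          // shared final result
  private static final Object lock = new Object();

  /* Called by each worker to add its partial result; the synchronized
     block plays the role of the lock statement described above. */
  private static void addToResult (double partial) {
    synchronized (lock) {
      result += partial;
    }
  }

  public static void main (String[] args) throws InterruptedException {
    final double x = 1.0;                      // the given value of x
    Thread tSin = new Thread(() -> addToResult(Math.sin(x)));
    Thread tCos = new Thread(() -> addToResult(Math.cos(x)));
    Thread tTan = new Thread(() -> addToResult(Math.tan(x)));
    tSin.start(); tCos.start(); tTan.start();
    tSin.join(); tCos.join(); tTan.join();     // wait for all three workers
    System.out.println("f(x) = " + result);
  }
}

Because addition is commutative and associative, the order in which the three workers acquire the lock does not affect the final value.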

URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000061

Data Organization Practices

Charles D. Tupper, in Data Architecture, 2011

Procedure Definition via Functional Decomposition

Functional decomposition is another fundamental activity that must take place. It is the breakdown of activity requirements into a hierarchical ordering. To cover this more fully, some terms and stages must be defined. A function is a continuously occurring activity that exists to meet the needs of the corporation. Within each function are many processes. Each process has a start activity, a process activity, and a termination activity, which completes the process. A process may or may not be broken down into subprocesses. Each subprocess, like its parent, also has an initiation, an activity state, and a termination; it differs from a process in that it represents activity at the lowest level, that is, the activity or event that takes place at the entity level.

There are multiple ways to formally interpret the functional decomposition diagram. Since it is organized in a hierarchical structure with indentations for each lower level of activity, it is probably easiest to proceed from top to bottom and left to right.

Each function must be documented as to what requirement it fulfills for the corporation and in what business subject area the work is done. Functions are composed of processes. Each process must also be documented, to ensure that the start activity or initiation trigger is defined along with the conditions under which it occurs, that the actual activity and what it comprises are described, and finally that the completion or termination step of the process is defined, including the state of the data at the completion of the process.

Within each process are subprocesses, which carry out the actual detailed operational work on each business entity. The documentation for a subprocess must include the event or subprocess trigger, the activity description, and the termination state of the business entities involved. This decomposition is a necessary activity that defines what the data is being used for. The form these decompositions take is specific to the method used, although they share the common layout just defined.
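As a rough illustration only (the chapter prescribes documentation practice, not code), the hierarchy just described might be modeled along the following lines; all type and field names are assumptions:

/* Hypothetical model of the hierarchy described above: a function contains
   processes, a process contains subprocesses, and each level records its
   initiation trigger, activity, and termination state. */
import java.util.List;

record Subprocess(String trigger, String activity, String terminationState) {}

record BusinessProcess(String initiation, String activity, String termination,
                       List<Subprocess> subprocesses) {}

record BusinessFunction(String requirementFulfilled, String subjectArea,
                        List<BusinessProcess> processes) {}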

URL: https://www.sciencedirect.com/science/article/pii/B9780123851260000097

Data Models Across the End-State Architecture

W.H. Inmon, ... Mary Levins, in Data Architecture (Second Edition), 2019

Functional Decomposition and Data Flow Diagrams

In the world of applications, there are the functional decomposition and the data flow diagram.

Fig. 14.1.2 depicts these constructs.

Fig. 14.1.2. The application environment.

The functional decomposition is the depiction of the functions that will be achieved by a system, laid out in a hierarchical fashion. At the top of the decomposition is the general function to be accomplished by the system. At the second level are the main functions to be accomplished. Each second-level function is then broken down into its subfunctions, until the point of basic functionality is reached.

The functional decomposition is useful for seeing what the different activities of a system will be: for organizing the functions, identifying overlap, and checking whether anything has been left out. When you are setting out on a long trip, it is useful to look at a map of the United States to see which states you will visit and the order in which you will travel through them.

After the functional decomposition is completed, the next step is to create a data flow diagram for each of the functions. The data flow diagram starts with the input to the module and shows how the input data will be processed to achieve the output data. The three major components of a data flow diagram are an identification of the input, a description of the logic that will occur in the module, and a description of the output.

If the functional decomposition is like a map of the United States, the data flow diagram is like a detailed map of a state. The data flow diagram tells you how to get across Texas. You start at El Paso, you head east, past McKittrick Canyon, go to Van Horn and Sierra Blanca, go through Pecos, then on to Midland and Odessa, and so forth. The map of Texas shows details that the map of the United States cannot show. By the same token, the map of Texas does not show you how to get from Los Angeles to San Jose or from Chicago to Naperville.

The nature of functional decomposition and data flow diagrams is such that process and data are intimately intertwined. Both process and data are needed in order to build a functional decomposition and its data flow diagrams.

Fig. 14.1.3 shows the tight interrelationship of data and process in the functional decomposition.

Fig. 14.1.3. Process and data are in lock step.

The building of functional decompositions and data flow diagrams is used to define and build applications. As a rule, these constructs can be very complex. One of the tools used to manage this complexity is the definition of the scope of development. At the very beginning, there is an exercise that requires the scope of the application to be defined. The scope definition is necessary in order to keep the size of the development effort reasonable. If a designer is not careful, the scope will become so large that the system will never be built. Therefore, it is necessary to rigorously define the scope before the development effort ever begins.

The result of the definition of scope is that, over time, the organization ends up with multiple applications, each of which has its own functional decomposition and data flow diagrams.

Fig. 14.1.4 shows that over time, each application has its own set of definitions.

Fig. 14.1.4. Each application has its own functional decomposition and set of data flow diagrams.

While the development process that has been described is normal for almost every shop, there is a problem. Over time, a serious amount of overlap between different applications starts to emerge. Because the scope of an application must be defined and enforced rigorously, the same or similar functionality starts to appear across multiple applications. When this happens, redundant data start to appear: the same or similar data element appears in multiple applications.

URL: https://www.sciencedirect.com/science/article/pii/B9780128169162000395

Models for Phase B

Philippe Desfray, Gilbert Raymond, in Modeling Enterprise Architecture with TOGAF, 2014

8.4.1 The "functional decomposition diagram" artifact

Description of the artifact

Name: Functional decomposition diagram
Experts: Executive managers, organizational unit managers
Designers: Business analysts, business experts
Recipients: Business analysts, business process analysts
Aim: To determine the enterprise's essential functions; to be able to subsequently define how these functions can best be carried out
Useful preliminary information: Organization of the enterprise; goals requiring evolutions or new functions

Function: Continually takes care of one of the enterprise's missions.

The elements present in this diagram are functions, which can be hierarchically embedded.

In Figure 8.8, functions are organized into layers. Enterprise management, which orients strategy, is found at the top level. Next come the operational functions, essentially linked to marketing and sales; finally, we have the support functions, such as administration and IT.

Figure 8.8. Essential functions of the Discount Travel company.

Functional decomposition is represented here through the graphical embedding of functions. The "Marketing management" function is thus broken down into the "Offer management" function (and other functions), which itself is broken down into the "Portfolio definition" function (and other functions).

Business function

A business function takes care of carrying out one of the enterprise's capacities. The enterprise is described through all its capacities and the services that deliver them. A business function is carried out continuously in order to guarantee one of the enterprise's missions. Unlike a business process, a business function has no specific temporal nature—no identified start or finish, no precisely defined incoming or outgoing products, no trigger events, and so on.

Summarized representation of the enterprise's capacities

Functions are graphically represented through a hierarchical structure. The aim of the functional decomposition diagram is thus to represent, on a single page, all the capacities of an organization that are relevant to the definition of an enterprise architecture. The functional decomposition diagram is not concerned with the "how" (in other words, with the way in which the enterprise carries out its functions). It thus provides a useful abstraction, focusing on what the enterprise must do and not on how it does it.

The construction of a functional decomposition diagram requires knowledge of the enterprise and its missions. Business functions can be demarcated by the business services participating in the function, as well as by the associated business processes.

Initial models indicating major directions for solutions, designed to help enterprise capacities evolve, can then be constructed to clarify the scope of enterprise architecture work and to orient decisions. For example, a plan for the progressive addition of new capacities can be defined.

The functional decomposition model can be enriched by adding specific links to orient future choices and decisions. For example, these links can indicate which application component supports which function, or which role participates in which function, and so on (see business footprint diagram in Figure 8.9).

Figure 8.9. Business footprint diagram focused on the "Sales" function.
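As an illustration of such enrichment, a hierarchy of embedded functions with support and participation links might be sketched as follows; the class and method names are ours, not the book's:

/* Illustrative sketch (not from the book): hierarchically embedded functions,
   enriched with links to supporting application components and participating
   roles, as in a business footprint diagram. */
import java.util.ArrayList;
import java.util.List;

class FunctionNode {
  final String name;
  final List<FunctionNode> subfunctions = new ArrayList<>();
  final List<String> supportingComponents = new ArrayList<>(); // application components
  final List<String> participatingRoles = new ArrayList<>();   // organizational roles

  FunctionNode(String name) { this.name = name; }

  FunctionNode embed(String subName) {          // graphical embedding of a subfunction
    FunctionNode sub = new FunctionNode(subName);
    subfunctions.add(sub);
    return sub;
  }
}

// Mirroring Figure 8.8: "Marketing management" contains "Offer management",
// which itself contains "Portfolio definition":
// new FunctionNode("Marketing management").embed("Offer management").embed("Portfolio definition");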

URL: https://www.sciencedirect.com/science/article/pii/B9780124199842000082

Functional Decomposition and Mereology in Engineering

Pieter Vermaas, Pawel Garbacz, in Philosophy of Technology and Engineering Sciences, 2009

5.1 The reconciled functional basis

A more recent research project that originates in the foundational work of Pahl and Beitz is the Reconciled Functional Basis project. This Reconciled Functional Basis (RFB, from now on) is the result of an effort to establish a standard taxonomy of basic technical functions (see, e.g., [Hirtz et al., 2002]) by reconciling two previous taxonomies: the NIST taxonomy (cf. [Szykman et al., 1999]) and the older versions of the Functional Basis (developed in [Little et al., 1997; Stone et al., 1998; McAdams et al., 1999; Stone et al., 1999; Stone and Wood, 2000]). Each of these taxonomies is the result of empirical generalisation over engineering specifications.

RFB analyses the notion of a functional decomposition against the background of its taxonomy of functions, which is in turn based on a taxonomy of flows. RFB modifies the meaning of the term "flow": here "flow" does not mean "a process of flowing" (e.g., removing debris) but "a thing that flows" (e.g., debris). 34 More precisely, in some papers, e.g., [Stone and Wood, 2000], the term is used in both meanings, but the RFB taxonomy of flows is based on the latter sense. This shift in meaning is, to be sure, justifiable, since it is hard to see how one might differentiate between a process of flowing and a function given the conception of Pahl and Beitz. The whole RFB taxonomy of flows is depicted in Table 2.

Table 2. The RFB taxonomy of flows [Hirtz et al., 2002]

Primary flows, with their secondary flows after the colon and tertiary flows in parentheses:

Material: Human; Gas; Liquid; Solid (Object, Particulate, Composite); Plasma; Mixture (Gas-gas, Liquid-liquid, Solid-solid, Solid-liquid, Liquid-gas, Solid-gas, Solid-liquid-gas, Colloidal)

Signal: Status (Auditory, Olfactory, Tactile, Taste, Visual); Control (Analog, Discrete)

Energy: Human; Acoustic; Biological; Chemical; Electrical; Electromagnetic (Optical, Solar); Hydraulic; Magnetic; Mechanical (Rotational, Translational); Pneumatic; Radioactive/Nuclear; Thermal

RFB also contains a three-layer classification of what are called basic functions. Each type of function is accompanied by a definition (in natural language), an example, and a set of synonymous names. The basic functions are divided, at the first layer, into eight primary types. Some primary basic functions are then divided into types of secondary basic functions, and some of these secondary basic functions are in turn divided into types of tertiary basic functions. The whole taxonomy is depicted in Table 3.

Table 3. The RFB taxonomy of functions [Hirtz et al., 2002]

Primary functions, with their secondary functions after the colon and tertiary functions in parentheses:

Branch: Separate (Divide, Extract, Remove); Distribute

Channel: Import; Export; Transfer (Transport, Transmit); Guide (Translate, Rotate, Allow degree(s) of freedom)

Connect: Couple (Join, Link); Mix

Control magnitude: Actuate; Regulate (Increase, Decrease); Change (Increment, Decrement, Shape, Condition); Stop (Prevent, Inhibit)

Convert: Convert

Provision: Store (Contain, Collect); Supply

Signal: Sense (Detect, Measure); Indicate (Track, Display); Process

Support: Stabilize; Secure; Position

Of course, the RFB taxonomy of basic functions is not itself a model of functional decomposition. For instance, the fact that Divide and Extract are subtypes of Separate does not mean that the former are subfunctions of the latter. Moreover, the basic functions are not functions in the sense the overall functions are: the overall functions are (complex) modifications of specific input flows into specific output flows, whereas the basic functions are modifications generalised over the flows involved. Hence, in the RFB, basic subfunctions are to be identified with basic functions operating on specific primary, secondary, and tertiary flows.

In RFB a functional decomposition is a conceptual structure that consists of an overall function that is decomposed, its subfunctions into which the overall function is decomposed, the flows which are modified by the subfunctions, and a net that links these modifications in an ordered way.

The overall function to be decomposed is defined in terms of the flows it modifies, which are taken from the RFB taxonomy of flows. Each of its subfunctions is defined both in terms of the flows the respective subfunction modifies and in terms of its type of modification, which is taken from the RFB taxonomy of basic functions. For instance, the overall function of a screwdriver, which is to tighten/loosen screws, is defined by means of the following ten input flows and nine output flows (see also Figure 3).

Figure 3. The RFB modelling of the overall function of a screwdriver [Stone and Wood, 2000, Fig. 2]

input flows for the function tighten/loosen screws:

energy flows: electricity, human force, relative rotation and weight;

material flows: hand, bit and screw;

signal flows: direction, on/off signal and manual use signal;

output flows for the function tighten/loosen screws:

energy flows: torque, human force, heat, noise and weight;

material flows: hand, bit and screw;

signal flows: looseness/tightness.

On the other hand, one of the subfunctions in the functional decomposition of this overall function tighten/loosen screws is called convert electricity to torque (see Figure 4), which means that it is a function of the convert type (cf. Table 3) and modifies one input flow into three output flows:

Figure 4. The RFB functional decomposition of a screwdriver [Stone and Wood, 2000, Fig. 4]

input flows for the subfunction convert electricity to torque:

energy flows: electricity;

material flows: none;

signal flows: none.

output flows for the subfunction convert electricity to torque:

energy flows: heat, noise and torque;

material flows: none;

signal flows: none.

The task of a designer who performs a functional decomposition is to link every input flow of the overall function to be decomposed with some of its output flows. Any such link that starts with an input flow of the overall function and ends with one of its output flows is called a function chain. In RFB one distinguishes two types of function chains: sequential and parallel. A function chain is sequential if it is ordered with respect to time, i.e., if a temporal permutation of its subfunctions may in principle result in failing to perform the overall function. A parallel function chain is a fusion of sequential function chains that share one or more flows.
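As a rough illustration of these definitions (ours, not part of RFB), a subfunction can be represented by its basic-function type together with its typed input and output flows, and a sequential function chain by a time-ordered list of such subfunctions:

/* Illustrative sketch: an RFB-style subfunction as a basic function type
   applied to specific input and output flows, and a sequential chain as
   a time-ordered list of subfunctions sharing flows. */
import java.util.List;

record Flow(String category, String name) {}            // e.g., ("energy", "electricity")

record Subfunction(String basicFunction,                // e.g., "convert" (Table 3)
                   List<Flow> inputs, List<Flow> outputs) {}

record FunctionChain(List<Subfunction> orderedSteps) {} // sequential: the order matters

// e.g., the screwdriver subfunction of Figure 4:
// new Subfunction("convert",
//     List.of(new Flow("energy", "electricity")),
//     List.of(new Flow("energy", "heat"), new Flow("energy", "noise"),
//             new Flow("energy", "torque")));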

In RFB one assumes that each subfunction of an overall function to be performed by a technical system S is realised by a component of S; however, the relation between subfunctions and components is many-to-many, i.e., one subfunction may be realised by several components and one component may realise more than one subfunction.

The notion of functional decomposition developed within RFB plays an important role in what is called the concept generator, a web-based computational tool for enhancing conceptual design. 35 The concept generator presents a designer with a number of different solutions to his or her design problem on the basis of previously developed (and stored) high-quality designs. One of the input data to be provided to this tool is a function chain for the product to be newly developed. The output describes design solutions in terms of the technical systems whose descriptions are loaded into the knowledge base of the concept generator. The functional decomposition links the overall function established by the generator with the conceptual components that compose a general description of the product, construed here as a solution to the initial design problem [Strawbridge et al., 2002; Bryant et al., 2004].

The RFB proposal adds precision and a wealth of empirical detail to the methodology of Pahl and Beitz. Its explicit aim of contributing to the standardisation of conceptual models in engineering makes it all the more valuable for a specifically mereological analysis of functional modelling.

In our terminology, the overall function of an RFB functional decomposition Decomp(Φ, Org(φ1, φ2, …, φn)) may be any function Φ, but the subfunctions φ1, φ2, …, φn are to be identified with RFB basic functions from Table 3 operating on specific RFB primary, secondary, and tertiary flows from Table 2. The net of flows between the subfunctions φ1, φ2, …, φn defines their organisation Org(φ1, φ2, …, φn).

In RFB the overall functions Φ and the subfunctions φ1, φ2, …, φn in functional decompositions Decomp(Φ, Org(φ1, φ2, …, φn)) may describe systems S and s1, s2, …, sn that are endurants and perdurants, but as in the methodology of Pahl and Beitz, the additional assumptions are made that functions comply with physical conservation laws for flows and that the subfunctions φ1, φ2, …, φn are to be taken from a set of basic functions. A further assumption seems to be that the functional orderings φi → φj making up the organisations Org(φ1, φ2, …, φn) of the subfunctions are always asymmetric: flows between two subfunctions in functional decompositions like the one depicted in Figure 4 always go in one direction. The benefit of philosophical research on functional descriptions to engineering can again lie in making these assumptions explicit and in challenging them. The requirement that functions always have to be decomposable into RFB basic functions operating on specific RFB flows again introduces a tension between the goals of functional decomposition: to facilitate designing and to facilitate communication. Consider, for instance, the basic function convert acoustic energy into electrical energy. The identification of this basic function in a decomposition of an overall function may be useful for a shared understanding of this overall function but will not help designers easily find a corresponding design solution. A requirement that subfunctions are only ordered in one direction may in turn be helpful in engineering for managing the flow of materials, energies, and signals, but may also turn out to be an unnecessary constraint on the decomposition of functions.

URL: https://www.sciencedirect.com/science/article/pii/B9780444516671500148

27th European Symposium on Computer Aided Process Engineering

Xinsheng Hu, ... Gürkan Sin, in Computer Aided Chemical Engineering, 2017

2.1 MFM modeling

MFM is used to model the chemical process by functional decomposition, using symbols. The symbols, shown in Figure 2, express the functional action types and logical action sequences in the chemical process and enable modeling at different abstraction levels. Following a functional decomposition based on first engineering principles and first operational principles, the plant can be divided into functional nodes. Then, by following the syntax of MFM, which can be learned from the studies (Lind, 2013; Wu et al., 2014), the simulation of the plant can be performed using the MFMSuite platform.

Figure 2. The basic MFM symbols.

URL: https://www.sciencedirect.com/science/article/pii/B9780444639653501008

*Constructive Induction

Igor Kononenko, Matjaž Kukar, in Machine Learning and Data Mining, 2007

Functional decomposition

The context of other attributes can be explicitly utilized by the method of functional decomposition. From the original target function (learning problem), functional decomposition builds a hierarchy of learning problems. By using constructive induction, it defines intermediate problems that correspond to new attributes. Such an intermediate step of functional decomposition is best illustrated by an example.

Table 8.1 shows a three-class learning problem with attributes A1, A2, and A3. Attributes A1 and A2 each have three possible values, whereas attribute A3 has only two. After building the Cartesian product of attributes A2 and A3, we can, by using the context of attribute A1, join the values of the original attributes into a new attribute A2,3, as shown in Table 8.2. The new attribute changes the learning problem as shown in Table 8.3. In this case, constructive induction has constructed the new attribute A2,3 as the minimum of the original attributes' values. The original learning problem is thereby transformed into calculating the maximum of attributes A1 and A2,3. This example clearly illustrates how constructive induction based on Cartesian products can define a useful new operator which is a result of the learning process and thus a part of the generated knowledge.

Table 8.1. A simple three-class learning problem with three attributes and 11 learning examples.

A1  A2  A3  C
1 1 1 1
1 1 3 1
1 2 1 1
1 2 3 2
1 3 1 1
1 3 3 3
2 2 1 2
2 3 1 2
2 3 3 3
3 1 1 3
3 3 1 3

Table 8.2. Construction of a new attribute by joining values of two original attributes from Table 8.1. The new attribute can be explained as the minimum of the original attributes' values.

A2  A3  A2,3
1 1 1
1 3 1
2 1 1
2 3 2
3 1 1
3 3 3

Table 8.3. A modified learning problem from Table 8.1; after joining two attributes the new learning problem is calculating the maximum of attributes A 1 and A 2,3.

A1  A2,3  C
1 2 2
1 3 3
1 1 1
2 3 3
2 1 2
3 1 3
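In code, the transformation just tabulated reduces to a minimum followed by a maximum; the following sketch (ours, with illustrative names) checks one example row:

/* Illustrative check of the decomposition in Tables 8.1-8.3 (our sketch):
   the new attribute is A2,3 = min(A2, A3), and the class is C = max(A1, A2,3). */
class DecompositionExample {
  static int newAttribute (int a2, int a3) { return Math.min(a2, a3); }
  static int targetClass (int a1, int a23) { return Math.max(a1, a23); }

  public static void main (String[] args) {
    // Example (A1=1, A2=2, A3=3) from Table 8.1, whose class is C=2:
    int a23 = newAttribute(2, 3);            // = 2, as in Table 8.2
    System.out.println(targetClass(1, a23)); // prints 2, as in Table 8.3
  }
}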

When the MDL or 1 − D measures are used for joining the values of Cartesian products, they completely ignore the context of the other attributes. On the other hand, if the ReliefF measure is used, the context is implicitly included. Functional decomposition accounts for the context explicitly, within the partitioning matrix. The partitioning matrix is an alternative representation of the learning set. Its columns correspond to the values of the Cartesian product of the attributes being joined; its rows correspond to the values of the Cartesian product of the other attributes. The entries of the partitioning matrix are class labels (or, more generally, distributions of class labels). A partitioning matrix for the learning examples from Table 8.1 is shown in Table 8.4. Within the partitioning matrix we search for compatible or almost compatible columns. Two columns are compatible if they are identical or if they differ only in positions where at least one of them has an empty value (-). The last row of the partitioning matrix names each column with a value of the new attribute, all compatible columns having the same name. The fewer the groups of mutually compatible columns, the fewer values the new attribute will have.

Table 8.4. A partitioning matrix for learning examples from Table 8.1.

A2:     1   1   2   2   3   3
A3:     1   3   1   3   1   3
A1=1:   1   1   1   2   1   3
A1=2:   -   -   2   -   2   3
A1=3:   3   -   -   -   3   -
A2,3:   1   1   1   2   1   3
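The compatibility test described above is straightforward to state in code. In the following sketch (ours, with illustrative names), a column is an integer array in which 0 encodes the empty value "-":

/* Illustrative test of column compatibility in a partitioning matrix:
   two columns are compatible if they differ only in positions where
   at least one of them is empty (encoded here as 0). */
class PartitioningMatrix {
  static boolean compatible (int[] col1, int[] col2) {
    for (int i = 0; i < col1.length; i++) {
      if (col1[i] != 0 && col2[i] != 0 && col1[i] != col2[i]) {
        return false; // both values present and different: a real mismatch
      }
    }
    return true;
  }

  public static void main (String[] args) {
    int[] c11 = {1, 0, 3}; // column (A2=1, A3=1) of Table 8.4
    int[] c31 = {1, 2, 3}; // column (A2=3, A3=1) of Table 8.4
    System.out.println(compatible(c11, c31)); // true: both get label A2,3 = 1
  }
}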

For joining the values of the Cartesian product in noise-free problems, the complexity criteria that minimize the number of new attribute values can be used. For real-world noisy data (when a more general scenario for joining partially compatible columns is used), it is more advisable to use robust criteria that minimize the classification error.

The problem of searching for optimal constructs is combinatorial in nature: it is not known in advance how many and which attributes are to be joined. Since exhaustive search is obviously out of the question, heuristic approaches are frequently used for this purpose. Here, the non-myopic algorithm ReliefF, which estimates attribute quality in the context of other attributes, can serve as a useful tool. We can observe the difference between the attribute quality estimates obtained with a non-myopic and a myopic (Eq. 6.19) ReliefF. If, for a particular attribute, this difference is large, the attribute carries information that, in combination with other attributes, can yield positive interaction information. Such an attribute is therefore a potentially good candidate for constructive induction methods.

URL: https://www.sciencedirect.com/science/article/pii/B9781904275213500083

Objects and classes: the basic concepts

Carol Britton, Jill Doake, in A Student Guide to Object-Oriented Development, 2005

Functional decomposition.

For many years, software systems were developed using a structured approach based on functional decomposition. This meant that developers decomposed and then constructed the system according to its main areas of activity – in other words, the subsystems that were identified corresponded directly to tasks that the system had to carry out. For example, in a bike hire system, such as the one used in this book, the system would probably be based on subsystems or processes dealing with issuing a bike, returning a bike, maintaining records for bikes and for customers, etc. Each of these processes would perform a separate function, and the data about bikes, transactions and customers would be passed freely between them. In functional decomposition, there was a clear separation between data and process, and data items could frequently be accessed by any part of the program. Figure 4.1 shows data about bikes being transferred between a data store that holds records about bikes and the processes that deal with issuing and returning bikes to customers. This is a potential source of problems, because the bike details are accessible to other parts of the system with no protection from processes that may access and modify them in error.

Figure 4.1. In functional decomposition, data (here, details about bikes) flows unprotected round the system.

URL: https://www.sciencedirect.com/science/article/pii/B9780750661232500049

Paradigms for Developing Cloud Applications

Dinkar Sitaram, Geetha Manjunath, in Moving To The Cloud, 2012

Example: Partitioning the Pustak Portal Data

To partition the Pustak Portal data shown in the previous case study, a combination of functional decomposition and sharding is used. The alternatives are discussed shortly, together with code fragments for implementing these alternatives. From the discussion that follows, it should be clear that there is no unique "best" partitioning alternative, and that the alternative chosen is strongly dependent upon the application (i.e., the queries that would be made against the database).

First, functional decomposition can be used to store the customer data, the transaction data, and the book inventory data in separate databases, each of which can then be sharded separately (similar to the configuration in Figure 5.2). A simple scheme for further scaling is to shard the customer data and transaction data on Customer_Id (by, for example, hashing the Customer_Id). Customer_Id is selected as the sharding attribute on the assumption that most online transactions relate to individual customers (such as finding the status of a recent order, or updating a customer's profile). Other transactions (e.g., finding the total sales of a book) are offline transactions for which minimizing the response time is not essential (though the query should still run efficiently). In that case, as stated previously, sharding the transaction database on Customer_Id retains associativity, so that queries such as finding the outstanding orders for a customer need not span multiple servers, which reduces response time.

Before this sharding method can be implemented, one problem has to be solved: sometimes only the Transaction_Id is given, and since the transaction tables are sharded on Customer_Id, it is necessary to find the Customer_Id from the Transaction_Id. For example, a book may have been shipped to the customer, and it may be desired to notify the customer via email that the book has shipped. The software that tracks the status of the order may send a message to an email module with the Transaction_Id of the order that just shipped. It is not possible to look up the transaction table to find the Customer_Id, since the transaction table is sharded on Customer_Id, so the shard to which the query should be sent is unknown! This problem can be solved by modifying the transaction table as shown in Table 5.4. Here, the Transaction_Id has been decomposed into a pair (Transaction_Num, Customer_Id), which forms a composite key for the table. The Transaction_Num can be any number that uniquely identifies the transaction for this customer, such as the seconds since a particular date or a randomly generated number. Thus it can be seen that the sharding strategy may have an impact on the design of the tables.

Table 5.4. Transaction Table Modified for Sharding

Transaction_Num   Customer_Id   Book_Id    Sale_Price
6732              38876         99420202   $11.95

(Transaction_Num and Customer_Id together form the composite Transaction_Id.)
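To make the composite key concrete, here is a hypothetical sketch (names ours, not the chapter's) of packing Transaction_Num and Customer_Id into a single Transaction_Id so that the owning shard can always be recovered:

/* Hypothetical helpers: pack Transaction_Num and Customer_Id into one
   composite Transaction_Id, so the Customer_Id, and hence the owning
   shard, can always be recovered from the id alone. */
class TransactionIds {
  static long makeTransactionId (int transactionNum, int customerId) {
    return (((long) transactionNum) << 32) | (customerId & 0xFFFFFFFFL);
  }

  static int customerIdOf (long transactionId) {
    return (int) transactionId; // the low 32 bits hold the Customer_Id
  }
}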

CODE TO INITIALIZE CONNECTION TO TRANSACTION DATABASE SHARDS

import java.sql.*;

class transDBs {

  public static final int NUM_TRANS_SHARD = 10; // Number of transaction DB shards

  static String dburl1 = "jdbc:mysql://transDB"; // First part of DB URL
  static String dburl2 = ":3306/db";             // Second part of DB URL
  static String userid = "user";                 // DB credentials (placeholders)
  static String pwd = "password";

  static Connection[] transDBConns; // Array of transaction DB connections

  /* Return connection to transaction DB shard for Customer_id */
  public static Connection getTransShardConnection (int Customer_id) {
    return (transDBConns[Customer_id % NUM_TRANS_SHARD]);
  }

  static {
    try {
      /* Load JDBC driver */
      Class.forName ("com.mysql.jdbc.Driver").newInstance();

      /* Initialize transaction DB shard connections */
      transDBConns = new Connection[NUM_TRANS_SHARD];
      for (int i = 0; i < NUM_TRANS_SHARD; i++) {
        /* transDBConns[0] points to jdbc:mysql://transDB0:3306/db, and so on */
        String dburl = dburl1 + Integer.toString (i) + dburl2;
        transDBConns[i] = DriverManager.getConnection (dburl, userid, pwd);
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
} // transDBs

The preceding example code can be used to implement sharding in the transaction database. It is assumed that the database is sharded into NUM_TRANS_SHARD shards. The class transDBs maintains an array transDBConns of connections to the various database shards. The method getTransShardConnection can be used to get a connection to the database shard for a customer with a particular Customer_Id. Queries can then be performed against a shard as given in the next code sample, which shows how to retrieve all the transactions for a customer (assuming that the Customer_Id is a secondary index into the transaction table). The statement starting transDBConn = gets a connection to the shard for a particular customer, and the subsequent stmt.executeQuery statement executes a query against the shard.

Executing a Query to a Transaction Database Shard

  Connection transDBConn; // Connection to transaction DB shard
  Statement stmt;         // SQL statement
  ResultSet resset;       // Result set for the query

  transDBConn = transDBs.getTransShardConnection (Customer_Id);
  stmt = transDBConn.createStatement();
  resset = stmt.executeQuery ("SELECT * FROM transTable WHERE custID=" + Integer.toString (Customer_Id));

A more sophisticated method can be used if the customer base is geographically distributed. Assume that the Address field can be used to extract the continent the customer lives on, and that Pustak Portal has servers on each continent. In that case, it may be useful to direct each customer's queries to a server on the continent where the customer lives. This can be achieved by hashing on both the continent and the Customer_Id as sharding attributes. For example, if the shard number is three digits (such as 342), the continent can be used to select the first digit and the Customer_Id to select the remaining two digits.
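A minimal sketch of such a two-level sharding function follows; the helper name and the continent-index encoding are assumptions, not from the chapter:

/* Hypothetical two-level sharding function: the continent index picks the
   first digit of a three-digit shard number, and a hash of the Customer_Id
   picks the remaining two digits. */
static int getShardNumber (int continentIndex, int Customer_Id) {
  int firstDigit = continentIndex % 10; // e.g., 3 for the customer's continent
  int lastTwo = Customer_Id % 100;      // two digits derived from Customer_Id
  return firstDigit * 100 + lastTwo;    // e.g., 3 and 42 give shard 342
}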

An intuitive method for sharding the inventory data is to use Book_Id as the sharding attribute. This would allow querying a single server to find all the warehouses in which a book is present, and directing orders to the warehouse nearest the customer. However, it would imply that some customer interactions, such as checkout, span multiple shards: when a customer checks out, the inventory of each book ordered has to be updated, and that would generally span multiple servers, since the sharding is by Book_Id.

The need to update multiple servers upon checkout can be avoided by sharding in the following way. Assume there is a warehouse inventory management system under which a warehouse has a very high (say 95%) probability of containing the books wanted by customers who live close to it. Under that assumption, it is possible to shard by Warehouse_Id. When a customer checks out, there is a very high probability that all the books ordered are in the nearest warehouse, so the transaction that updates the inventory is very likely to involve only one server. If a book is not found in the nearest warehouse, the action taken depends on the inventory management system; for example, if there is a master warehouse that holds copies of all books, the master warehouse can be queried.

URL: https://www.sciencedirect.com/science/article/pii/B9781597497251000056

Best Practices in Spacecraft Development

Chris Hersman, Kim Fowler, in Mission-Critical and Safety-Critical Systems Handbook, 2010

2.2.2 Interface Management

Interface management encompasses the definition, documentation, and control of system interfaces. As part of the functional decomposition of requirements, interfaces are defined and optimized. Interface boundaries depend on how the requirements are flowed down to subsystems. Once the interfaces are established, all aspects of the interface are documented in interface control documents. The types of information contained in interface control documents include both mechanical and electrical interfaces: physical footprint, thermal environment, vibration environment, field of view requirements, power requirements, command and telemetry formats, and electromagnetic compatibility requirements. Throughout the development, accurate documentation of interfaces is critical to the interface control function.

URL: https://www.sciencedirect.com/science/article/pii/B9780750685672000056