Inductive Logic

Ronald Ortner, Hannes Leitgeb, in Handbook of the History of Logic, 2011

The Bridge to Neural Network Semantics

Interpreted dynamical systems — the paradigm instances of which are artificial neural networks that come with a logical interpretation — may also be used to yield a semantics for nonmonotonic conditionals. Here are some relevant references: d'Avila Garcez, Lamb, and Gabbay [2008] give a general overview of connectionist non-classical logics, including connectionist (i.e., neural-network-related) nonmonotonic logic, as well as many references to their own original work. Balkenius [1991], Blutner [2004], and Leitgeb [2001], [2004], [2005] are important primary references. The main idea behind all of these theories is that if classical logic is replaced by some system of nonmonotonic reasoning, then a logical description or characterization of neural network states and processes becomes possible. The following exposition will introduce Leitgeb's approach, which yields a neural network semantics for KLM-style systems; the presentation will follow the more detailed introduction to neural network semantics for conditionals in Leitgeb [2007].

The goal is to complement the typical description of neural networks as dynamical systems by one according to which cognitive dynamical systems have beliefs, draw inferences, and so forth. Hence, the task is to associate states and processes of cognitive dynamical systems with formulas. Here is what we will presuppose: we deal with discrete dynamical systems with a set S of states. On S, a partial order ≤ is defined, which we will interpret as an ordering of the amount of information that is carried by states; so s ≤ s′ will mean: s′ carries at least as much information as s does. We will also assume that for every two states s and s′ there is a uniquely determined state sup(s, s′) which (i) carries at least as much information as s, which also (ii) carries at least as much information as s′, and which (iii) is the state with the least amount of information among all those states for which (i) and (ii) hold. Formally, such a state sup(s, s′) is the supremum of s and s′ in the partial order ≤. Finally, an internal next-state function is defined for the dynamical system, where this next-state function is meant to be insensitive to possible external inputs to the system; we will introduce inputs only in the subsequent step.

In this way, we get what is called an 'ordered discrete dynamical system' in Leitgeb [2005]:

DEFINITION 55. An ordered discrete dynamical system is a triple S = 〈S, ns, ≤〉, such that:

1.

S is a non-empty set (the set of states).

2.

ns : S → S (the internal next-state function).

3.

≤ ⊆ S × S is a partial order (the information ordering) on S, such that for all s, s′ ∈ S there is a supremum sup(s, s′) ∈ S with respect to ≤.

In case an artificial neural network is used, the information ordering on its states, i.e. on its possible patterns of activation, can be defined according to the following idea: the more the nodes are activated in a state, the more information the state carries. Accordingly, sup(s, s′) would be defined as the maximum of the activation patterns that correspond to s and s′; in such a case one might also speak of sup(s, s′) as the "superposition of the states s and s′". (But note that this is just one way of viewing neural networks as ordered systems.) The internal dynamics of the network would be captured by the next-state mapping ns that is determined by the pattern of edges in the network.
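To make this concrete, here is a minimal Python sketch of a network viewed as an ordered system in the sense just described; the three-node network, its weights, and its threshold rule are invented for illustration and are not taken from the cited papers:

```python
# States are binary activation patterns; the information ordering is
# componentwise comparison, and sup is the componentwise maximum,
# i.e. the superposition of two activation patterns.

def leq(s, t):
    """s <= t: t carries at least as much information as s."""
    return all(a <= b for a, b in zip(s, t))

def sup(s, t):
    """Supremum of two states: their superposition."""
    return tuple(max(a, b) for a, b in zip(s, t))

# Internal next-state function ns of a tiny 3-node network: node k fires
# when the weighted sum of its inputs reaches the threshold 1.
WEIGHTS = [(0.0, 1.0, 0.0),   # node 0 listens to node 1
           (1.0, 0.0, 0.0),   # node 1 listens to node 0
           (0.5, 0.5, 0.0)]   # node 2 needs nodes 0 and 1 together

def ns(s):
    return tuple(1 if sum(w * a for w, a in zip(row, s)) >= 1 else 0
                 for row in WEIGHTS)
```

Here sup((1,0,0), (0,1,0)) = (1,1,0), and leq(s, sup(s, t)) holds for all states, as conditions (i)-(iii) above require.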

Next, we add external inputs, which are represented by states s* ∈ S and which are considered to be fixed for a sufficient amount of time. The state transition mapping F_{s*} can then be defined by taking both the internal next-state mapping and the input s* into account: the next state of the system is given by the superposition of s* with the next internal state ns(s), i.e.:

F_{s*}(s) := sup(s*, ns(s))

The dynamics of our dynamical systems is thus determined by iteratively applying F_{s*} to the initial state. Fixed points s_stab of F_{s*} are regarded as the "answers" which the system gives to s*, as is common procedure in neural network computation. Note that in general there may be more than one such stable state for the state transition mapping F_{s*} that is determined by the input s* (and by the given dynamical system), and there may also be no stable state at all for F_{s*}: in the former case there is more than one "answer" to the input; in the latter case there is no "answer" at all. The different stable states may be reached by starting the computation in different initial states of the overall system.
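The relaxation just described is easy to simulate. The following Python sketch (with an invented internal dynamics ns, chosen only for illustration) clamps an input s*, iterates F_{s*}(s) = sup(s*, ns(s)), and reports a fixed point as the system's "answer", or None when no stable state is reached within the step budget:

```python
# Minimal sketch of the input-driven dynamics F_{s*}(s) = sup(s*, ns(s)).

def sup(s, t):
    """Superposition: componentwise maximum of two activation patterns."""
    return tuple(max(a, b) for a, b in zip(s, t))

def ns(s):
    """Toy internal dynamics: node 2 fires once nodes 0 and 1 both do."""
    x0, x1, x2 = s
    return (x0, x1, 1 if x0 and x1 else 0)

def answer(s_star, s0=(0, 0, 0), max_steps=100):
    """Iterate F_{s*} from the initial state s0; return a stable state,
    or None if no fixed point is reached (no "answer" at all)."""
    s = s0
    for _ in range(max_steps):
        s_next = sup(s_star, ns(s))
        if s_next == s:
            return s
        s = s_next
    return None
```

With the input (1, 1, 0) clamped, the system relaxes to the stable state (1, 1, 1): the clamped nodes switch node 2 on, and the result is the "answer" to that input.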

Now formulas can be assigned to the states of an ordered discrete dynamical system. These formulas are supposed to express the content of the information that is represented by these states. For this purpose, we fix a propositional language L. The assignment of formulas to states is achieved by an interpretation mapping J. If φ is a formula in L, then J(φ) is the state that carries exactly the information that is expressed by φ, i.e., neither less nor more than what is expressed by φ. So we presuppose that for every formula in L there is a uniquely determined state the total information of which is expressed by that formula. Expressed in terms of belief, we can say that in the state J(φ) all the system believes is that φ, i.e., the system only believes φ and all the propositions which are contained in φ from the viewpoint of the system. (This relates to Levesque's [1990] modal treatment of the 'all I know' operator.) We will not demand that every state necessarily receives an interpretation, but just that every formula in L is the interpretation of some state. Furthermore, not just any assignment whatsoever of states to formulas will be allowed: we will additionally assume certain postulates to be satisfied which guarantee that J is compatible with the information ordering that was imposed on the states of the system beforehand. An ordered discrete dynamical system together with such an interpretation mapping is called an 'interpreted ordered system' (cf. Leitgeb [2005]). This is the definition in detail:

DEFINITION 56. An interpreted ordered system is a quadruple S_J = 〈S, ns, ≤, J〉, such that:

1.

〈S, ns, ≤〉 is an ordered discrete dynamical system.

2.

J : L → S (the interpretation mapping) is such that the following postulates are satisfied:

(a)

Let TH_J = {φ ∈ L | for all ψ ∈ L: J(φ) ≤ J(ψ)}: then it is assumed that for all φ, ψ ∈ L: if TH_J ⊢ φ → ψ, then J(ψ) ≤ J(φ).

(b)

For all φ, ψ ∈ L: J(φ ∧ ψ) = sup(J(φ), J(ψ)).

(c)

For every φ ∈ L there is a J(φ)-stable state (i.e., a fixed point of F_{J(φ)}).

(d)

There is a J(⊤)-stable state s_stab such that J(⊥) ≰ s_stab.

We say that S_J satisfies the uniqueness condition if for every φ ∈ L there is precisely one J(φ)-stable state.

E.g., postulate 2b expresses that the state that belongs to a conjunctive formula φ∧ψ ought to be the supremum of the two states that are associated with the two conjuncts φ and ψ: this is the cognitive counterpart of the proposition expressed by a conjunctive sentence being the supremum of the propositions expressed by its two conjuncts in the partial order of logical entailment. For a detailed justification of all the postulates, see Leitgeb [2005].

Finally, we define what it means for a nonmonotonic conditional to be satisfied by an interpreted ordered system. We say that a system satisfies φ ⇒ ψ if and only if whenever the state that is associated with φ is fed into the system as an input, i.e. whenever the input represents a total belief in φ, the system will eventually end up believing ψ in its "answer states", i.e. the state that is associated with ψ is contained in all the states which are stable with respect to this input. Collecting all such conditionals φ ⇒ ψ which are satisfied by the system, we get what we call the 'conditional theory' that corresponds to the system.

DEFINITION 57. Let S_J = 〈S, ns, ≤, J〉 be an interpreted ordered system:

1.

S_J ⊨ φ ⇒ ψ iff for every J(φ)-stable state s_stab: J(ψ) ≤ s_stab.

2.

TH(S_J) = {φ ⇒ ψ | S_J ⊨ φ ⇒ ψ} (the conditional theory corresponding to S_J).
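To see Definition 57 in action, here is a deliberately tiny, non-neural instantiation in Python; the atoms, the single inference rule inside ns, and the fragment of J are all invented for illustration. States are sets of believed atoms, ≤ is inclusion, sup is union, and the internal dynamics encodes the rule "whoever believes p comes to believe q":

```python
from itertools import combinations

def ns(s):
    """Internal dynamics: a believer of p comes to believe q as well."""
    return s | {'q'} if 'p' in s else s

def stable_states(s_star, universe=('p', 'q', 'r')):
    """All fixed points of F_{s*}(s) = s* ∪ ns(s), found by brute force."""
    all_states = [frozenset(c)
                  for k in range(len(universe) + 1)
                  for c in combinations(universe, k)]
    return [s for s in all_states if (s_star | ns(s)) == s]

# A fragment of the interpretation mapping J for atomic formulas.
J = {'p': frozenset({'p'}), 'q': frozenset({'q'})}

def satisfies(phi, psi):
    """S_J |= phi => psi iff J(psi) lies below every J(phi)-stable state."""
    return all(J[psi] <= s for s in stable_states(J[phi]))
```

This toy system satisfies p ⇒ q (clamping belief in p forces q into every stable state) but not q ⇒ p, so its conditional theory is genuinely nonmonotonic in flavor.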

Leitgeb [2005] proves the following soundness and completeness theorem:

THEOREM 58.

Let S_J = 〈S, ns, ≤, J〉 be an interpreted ordered system which satisfies the uniqueness condition:

Then TH(S_J) is a consistent conditional C-theory extending TH_J.

Let TH be a consistent conditional C-theory extending a given classical theory TH₀:

It follows that there is an interpreted ordered system S_J = 〈S, ns, ≤, J〉 such that TH(S_J) = TH, TH_J ⊇ TH₀, and S_J satisfies the uniqueness condition.

These results can be extended in various directions. In particular, some interpreted ordered systems can be shown to have the property that each of their states s may be decomposed into a set of substates s_i which can be ordered in such a way that the dynamics for each substate s_i is determined by the dynamics for the substates s_1, s_2, …, s_{i−1} at the previous point of time. Such systems are called 'hierarchical' in Leitgeb [2005]. We will not go into any details, but one can prove soundness and completeness theorems for such hierarchical interpreted systems and the system CL. In Leitgeb [2004] further soundness and completeness theorems are proved for more restricted classes of interpreted dynamical systems and even stronger logical systems for nonmonotonic conditionals in the KLM tradition.

As it turns out, if artificial neural networks with an information ordering are extended by an interpretation mapping along the lines explained above, then they are special cases of interpreted ordered systems; moreover, if the underlying artificial neural network consists of layers of nodes, such that the layers are arranged hierarchically and all connections between nodes run only from one layer to the next, then the corresponding interpreted ordered system is a hierarchical one. Thus, various systems of nonmonotonic logic are sound and complete with respect to various types of neural network semantics. However, so far these results only cover the short-term dynamics of neural networks that is triggered by external input, for which the topology of edges and the distribution of weights over the edges within the network are taken to be rigid. The long-term dynamics of networks, given e.g. by supervised learning processes which operate on sequences of input-output pairs, is still beyond any logical treatment that is continuous with KLM-style nonmonotonic reasoning. So the inductive logic of learning, rather than inference, within neural networks is still an open research problem (see Leitgeb [2007] for a detailed statement of this research agenda).

URL: https://www.sciencedirect.com/science/article/pii/B9780444529367500185

Modeling the Stochastic Nature of Gene Regulation With Boolean Networks

David Murrugarra, Boris Aguilar, in Algebraic and Combinatorial Computational Biology, 2019

5.6.2.1 Transition Probabilities

The application of an action a results in a new SDDS, F̂^a = {F_k^a, p_k^↑, p_k^↓}_{k=1,…,n}. For each state-action pair (x, a), the transition probability P_{x,y}^a from x to a state y upon execution of a is computed using Eq. (5.2) with f_k replaced by F_k^a, that is,

P_{x,y}^a = ∏_{k=1}^{n} Prob_{F_k^a}(x_k → y_k),

where Prob_{F_k^a}(x_k → y_k) is the probability that x_k will change its value to y_k under F_k^a.
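The coordinatewise product above can be made concrete with a small sketch in Python. The two-node network and the probabilities below are invented, the action superscript is dropped, and the usual SDDS update convention is assumed: coordinate k moves to F_k(x) with activation probability p_k^↑ when F_k(x) > x_k, moves with degradation probability p_k^↓ when F_k(x) < x_k, and stays put when F_k(x) = x_k.

```python
# A toy 2-node Boolean SDDS (illustrative functions and probabilities).
F = [lambda x: x[1],           # node 1 copies node 2
     lambda x: x[0] & x[1]]    # node 2 is the AND of both nodes
up, down = [0.9, 0.8], [0.7, 0.6]   # p_k^up, p_k^down

def coord_prob(k, x, yk):
    """Prob_{F_k}(x_k -> y_k): chance that coordinate k ends at yk."""
    target = F[k](x)
    if target == x[k]:                       # no change proposed
        return 1.0 if yk == x[k] else 0.0
    p = up[k] if target > x[k] else down[k]  # change proposed
    if yk == target:
        return p
    return 1.0 - p if yk == x[k] else 0.0

def transition_prob(x, y):
    """P_{x,y}: product over coordinates of Prob_{F_k}(x_k -> y_k)."""
    prob = 1.0
    for k in range(len(x)):
        prob *= coord_prob(k, x, y[k])
    return prob
```

For example, from x = (0, 1) the move to y = (1, 0) requires node 1 to activate (probability 0.9) and node 2 to degrade (probability 0.6), giving 0.54; and for any fixed x the probabilities over all y sum to 1, as a transition matrix row must.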

URL: https://www.sciencedirect.com/science/article/pii/B9780128140666000052

Application of pole decomposition to an equation governing the dynamics of wrinkled flame fronts

O. Thual, ... M. Hénon, in Dynamics of Curved Fronts, 1988

5 Discussion.

We have found that the pole decomposition provides a reduction of the Sivashinsky equation to a discrete dynamical system. The latter admits steady state solutions with all the poles aligned parallel to the imaginary axis. In the spatially periodic case the maximum number N of complex conjugate pairs of poles in an alignment is such that v(2N − 1) < 1. We conjecture more general results for analytic periodic initial data: (i) as t → ∞ all the singularities (poles and others) are pushed off to infinity, except a finite number N of pairs of poles satisfying the above inequality; (ii) for real times poles stay uniformly bounded away from the real axis. The former result has been recently established for the case of initial conditions with a finite but arbitrary number of poles [13, 14]. In this context we also mention a result of Foias, Nicolaenko, Sell and Temam [17] concerning the family of equations

(5.1) ∂_t u + u ∂_x u = M_ε u,

where the operator Mɛ has the following representation in Fourier space

(5.2) M_ε : û_k ↦ [ε(|k| − k²) + (1 − ε)(k² − k⁴)] û_k.

It is assumed that 0 ≤ ε ≤ 1; ε = 1 is the Sivashinsky equation. For the case of space-periodic odd solutions of equation (5.1), Foias et al. [15] have shown that when t → ∞ the solutions have a finite-dimensional attractor imbedded in a finite-dimensional manifold.

In the periodic case with small v we found that the poles on a vertical alignment condense into a ln coth distribution, the signature of which in real physical space is a cusp in the flame front. This suggests a real singularity. Actually, as we have seen in section 4, there are only complex poles, but the ones nearest to the real axis are at a distance O(v/ln v⁻¹), so that the cusp is slightly rounded. It is noteworthy that this distance is smaller by a factor 1/ln v⁻¹ than what would have been inferred from a naive analysis based on the observation that wavenumbers in excess of v⁻¹ are linearly stable. This discrepancy is important when attempting to solve the Sivashinsky equation numerically: the number of Fourier modes that are necessary scales like v⁻¹ ln v⁻¹ and not like v⁻¹. Otherwise the numerics will work well as long as the poles stay within O(v) of the real axis (as in the two-pole solution); after some time, however, the poles will start piling up vertically and spurious singularities may be observed [4].

We finally mention an open problem. We have shown that the Sivashinsky equation has stable steady solutions, which can be made trivially time-dependent by a Galilean transformation. There may also be non-trivial time-dependent solutions. In one spectral simulation with 25 linearly unstable modes (v = 1/25) we have observed a complicated time-dependent regime going through a succession of single- and multi-wrinkle configurations. At the moment we cannot rule out everlasting, possibly chaotic, time-dependent solutions [16].

URL: https://www.sciencedirect.com/science/article/pii/B9780080925233500496

A Different Approach to Large-Eddy Simulation with Advantages for Computing Turbulence-Chemical Kinetics Interactions

J.M. McDonough, in Parallel Computational Fluid Dynamics 2002, 2003

5 CONCLUSIONS

In this paper we have presented an alternative approach to large-eddy simulation based on unfiltered equations of motion and discrete dynamical systems SGS models. We have noted the advantages of this for modeling interactions of hydrodynamic turbulence with other physical fluctuations, and we have demonstrated the effectiveness of the individual pieces of the overall procedure by comparing them with standard LES in the case of the filtering approach used, and with experimental data in the case of the SGS models. Both comparisons showed very satisfactory agreement between the new approach and the data employed. We have also noted that the form of algorithm being proposed is highly parallelizable with a hierarchy of at least three levels of parallelization possible.

URL: https://www.sciencedirect.com/science/article/pii/B9780444506801500048

Handbook of Dynamical Systems

Claude Bardos, Basile Nicolaenko, in Handbook of Dynamical Systems, 2002

Theorem 5.6

If there exists λ ∈ (0, 1) such that D_xS(x) ∈ L_λ(E) for all x ∈ X, then the discrete dynamical system {Sⁿ}_{n≥1} possesses an exponential attractor.

Define S as the map induced by Poincaré sections of a Lipschitz continuous semiflow S(t), t ⩾ 0, at the time t = T* for some T* > 0; that is, S := S(T*). We consider the discrete semigroup {Sⁿ}_{n≥0} generated by S. Once the existence of exponential attractors for the discrete case is proved, the result for the continuous case follows in a standard manner (e.g., see [67]).

URL: https://www.sciencedirect.com/science/article/pii/S1874575X02800327

Geometric Function Theory

Frederick P. Gardiner, William J. Harvey, in Handbook of Complex Analysis, 2002

1.4 Dynamical systems and deformations

By definition, universal Teichmüller space T is QS factored by the closed subgroup of Möbius transformations that preserve the unit disc. It is universal in the naive sense that it contains the deformation spaces of nearly all one-dimensional dynamical systems F that act on the unit circle. When we say this, we have in mind the following two types of dynamical systems: F is either a Fuchsian group acting on the unit circle or a C²-homeomorphism acting on the unit circle. In the second situation, it is often useful to assume F has irrational rotation number. The theory is already complicated when F is a diffeomorphism and becomes even more so if F is allowed to have one critical point. Under smooth changes of coordinate, we may assume F maps an interval on the real axis to another interval on the real axis and maps the origin to a point c. We also assume that in suitable smooth coordinates F takes the form of a power law:

F ( x ) = | x | α sign ( x ) + c ,

for some constant α > 1.

The deformation space T(F) (sometimes called the Teichmüller space of F) is defined to be the space of equivalence classes of quasisymmetric maps h ∈ QS such that hFh⁻¹ is a dynamical system of the same type. Two maps h₀ and h₁ are equivalent if there is a Möbius transformation A such that A∘h₀ = h₁. Thus, it is the set of quasisymmetric conjugacies to dynamical systems of the same type, factored by this equivalence relation.

In the case that F is a Fuchsian group with generators γ_j, this means that, for each j, the conjugate hγ_jh⁻¹ is also a Möbius transformation preserving the unit circle. In the case F is a C²-homeomorphism, possibly with a power law, this means hFh⁻¹ is also a C²-homeomorphism. If h is itself a Möbius transformation, we consider the dynamical systems generated by F and by hFh⁻¹ as not differing in any essential way. For this reason, we view T(F) as a subspace of QS mod PSL(2, ℝ). That is, two elements h₁ and h₂ are considered equivalent if there is a Möbius transformation A such that A∘h₁ = h₂.

It turns out that the factor space T = QS mod PSL(2, ℝ) carries in a natural way the structure of a complex manifold, as do the subspaces T(F) for many dynamical systems F. Even the statement that T(F) is connected is already significant, and knowledge of geometrical properties of curves which join pairs of points in T(F) can have dynamical consequences.

Here we explain why conjugacies that allow distortion of eigenvalues cannot be smooth: therefore, to obtain interesting conjugacies one must expand into the quasisymmetric realm.

Lemma 2

Let F₀ and F₁ be two discrete dynamical systems acting on the real axis, generated by x ↦ γ₀(x) = λ₀x and by x ↦ γ₁(x) = λ₁x, respectively, and assume 1 < λ₀ < λ₁. Let h be a conjugacy, so that hγ₀h⁻¹ = γ₁. Then h can be at most Hölder continuous with exponent α = log λ₀/log λ₁.

Proof

Because h(λ₀ⁿx) = λ₁ⁿh(x), by plugging in x = 1 and letting n approach −∞ and ∞, one sees that h must fix 0 and ∞. By postcomposition of h with a real dilation, we may assume h(1) = 1, and this implies h(λ₀ⁿ) = λ₁ⁿ. But any such map taking these values for arbitrarily large negative values of n cannot satisfy an inequality of the form |h(x)| ⩽ C|x|^α unless α ⩽ log λ₀/log λ₁.
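A quick numerical companion to the lemma (a sketch with arbitrarily chosen multipliers, not from the text): the power map h(x) = sign(x)|x|^β with β = log λ₁/log λ₀ satisfies the conjugacy equation h(λ₀x) = λ₁h(x), and its inverse is a power law with exponent log λ₀/log λ₁ < 1, which is precisely the Hölder barrier stated in the lemma.

```python
import math

l0, l1 = 2.0, 8.0                       # multipliers with 1 < l0 < l1
beta = math.log(l1) / math.log(l0)      # conjugacy exponent (here 3.0)

def h(x):
    """Power-law map satisfying h(l0*x) = l1*h(x)."""
    return math.copysign(abs(x) ** beta, x)

def conjugation_error(x):
    """|h(gamma_0(x)) - gamma_1(h(x))|; vanishes up to rounding."""
    return abs(h(l0 * x) - l1 * h(x))

# The inverse conjugacy is a power law with the reciprocal exponent,
# log(l0)/log(l1) < 1, hence only Hölder continuous at the fixed point.
holder_exponent = math.log(l0) / math.log(l1)
```

Since one of the two mutually inverse conjugacies is forced below exponent 1, the pair can never be smooth, which is exactly the point made before the lemma.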

URL: https://www.sciencedirect.com/science/article/pii/S1874570902800166

Special Volume: Computational Methods for the Atmosphere and the Oceans

Eric Simonnet, ... Michael Ghil, in Handbook of Numerical Analysis, 2009

2.2 Local codimension-1 bifurcations of limit cycles

One may ask whether it is possible to apply the procedures described above to more complex (limit) sets. A very similar discussion applies for bifurcations of limit cycles, although there are some additional complications. Let us assume that one has a limit cycle γ of the original system (Eq. (2.1)) for a parameter p̄ that we omit in the notation for simplicity, and whose corresponding solution is x̄(t) = x̄(t + T). We consider an infinitesimal perturbation ξ(t) of γ, i.e., we let x(t) = x̄(t) + ξ(t) in Eq. (2.1), and neglecting quadratic terms, one obtains

(2.13) ξ̇ = J(x̄(t))ξ,

and J(x̄(t)) is now a T-periodic matrix. It can be shown that the fundamental solution matrix X of the system (Eq. (2.13)) can be written as X(t) = Y(t)e^{tR}, where Y(t + T) = Y(t) and R is a constant matrix. One thus obtains that

(2.14) X(t + T) = MX(t), M = e^{TR}.

The matrix M is called the monodromy matrix, and its eigenvalues σ₁, …, σ_n are called the Floquet multipliers. The monodromy matrix is not uniquely determined by the solutions of Eq. (2.13), but its eigenvalues are. Since the perturbation ξ(t) = x̄(t + ε) − x̄(t), ε small, is T-periodic, it immediately implies that M has a unit eigenvalue, i.e., perturbations along γ neither diverge nor converge. The linear stability of γ is thus determined by the remaining n − 1 eigenvalues.
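A scalar toy version of Eqs. (2.13)-(2.14) (our own illustration; the coefficient function is invented) makes the monodromy construction concrete: for ξ̇ = a(t)ξ with a(t) = c + cos(t) and T = 2π, the monodromy "matrix" is the scalar M = exp(cT), because the cosine integrates to zero over one period, and M is then the single Floquet multiplier.

```python
import math

c = -0.1               # arbitrary constant part of a(t); c < 0 => stable
T = 2.0 * math.pi      # period of a(t) = c + cos(t)

def a(t):
    return c + math.cos(t)

def monodromy(steps=200_000):
    """Integrate xi' = a(t)*xi over one period with explicit Euler,
    starting from xi(0) = 1; xi(T) is the (scalar) monodromy factor M."""
    xi, dt = 1.0, T / steps
    for k in range(steps):
        xi += a(k * dt) * xi * dt
    return xi
```

The numerically computed M agrees with exp(cT) ≈ 0.533, and since |M| < 1 the perturbation decays: the periodic part of a(t) affects the shape of ξ within a period but not its growth rate from period to period.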

Let Σ be a (fixed) local cross-section of dimension n − 1 of the limit cycle γ such that the periodic orbit is not tangent to this hypersurface, and denote by x* the intersection of Σ with γ. There is a nice geometrical interpretation of the monodromy matrix in terms of the Poincaré map defined as P(x) = φ_τ(x), where x is assumed to be in a neighborhood of x* and τ is the time taken for the orbit φ_t(x) to first return to Σ (as x approaches x*, τ tends to T). After a change of basis such that the matrix M has a column (0, …, 0, 1)^T corresponding to the unit eigenvalue, the remaining (n − 1) × (n − 1) block corresponds to the linearized Poincaré map.

These remarks show that the bifurcations of limit cycles are related to the behavior of a discrete dynamical system (the Poincaré map)

(2.15) x_{n+1} = P(x_n),

rather than a continuous dynamical system as in the case of fixed points. The bifurcation theory for fixed points of the iterative map with an eigenvalue of unit norm is completely analogous to the bifurcation theory for equilibria with an eigenvalue on the imaginary axis. Periodic orbits become unstable when Floquet multipliers σ_i cross the unit circle as the parameter p is changed (recall that the Floquet multipliers depend on p). There are three important cases.

1.

A real Floquet multiplier crosses the unit circle with σ(p̄) = 1 (saddle-node). This situation can be shown to be topologically equivalent to the one-dimensional discrete dynamical system

(2.16) x_{n+1} = P(x_n), with P(x) = p + x ± x².

Let us consider the supercritical case P(x) = p + x − x² and assume that p̄ = 0 for simplicity. As p becomes positive, two fixed points x*₁ and x*₂ of the iterative map (Eq. (2.16)) appear, which are solutions of P(x) = x. These two fixed points correspond to the appearance of two new families of periodic orbits. One family is stable (P′(x*₁) < 1) while the other is unstable (P′(x*₂) > 1). As in the case of equilibria, particular constraints may lead to transcritical or pitchfork bifurcations (see Fig. 2.6).

Fig. 2.6. Phase space view associated with the saddle-node (left panel) and pitchfork (right panel) bifurcations of periodic orbits.

2.

A real Floquet multiplier crosses the unit circle with σ(p̄) = −1. This situation is called a flip or period-doubling bifurcation and has no equivalent for equilibria.

The system is topologically equivalent to

(2.17) x_{n+1} = P(x_n), with P(x) = −(1 + p)x ± x³.

This situation corresponds to the pitchfork case for the second-iterate map P². Again consider (with p̄ = 0) the supercritical case P(x) = −(1 + p)x + x³. As p becomes positive, two fixed points of the second iterate P² appear which are not fixed points of the first iterate. This means that another stable periodic orbit of period 2T arises, whereas the original periodic orbit becomes unstable (see Fig. 2.7). The corresponding trajectories alternate from one side of Σ to the other along the direction of the eigenvector associated with the eigenvalue σ = −1. The new periodic orbit is twisted around the original periodic orbit like a Möbius band. The consequence is that this bifurcation cannot occur in a two-dimensional system, since one cannot embed a Möbius band in a two-dimensional manifold.

Fig. 2.7. Phase space view of a period-doubling or flip bifurcation of a periodic orbit (upper panel) and a Neimark-Sacker or torus bifurcation (lower panel).

3.

The final example corresponds to the case of a pair of complex conjugate eigenvalues σ, σ̄ crossing the unit circle, such that |σ(p̄)| = |e^{iφ}| = 1. This bifurcation is called the Neimark–Sacker or torus bifurcation. If one assumes, after reduction onto a two-dimensional invariant manifold, that d|σ(p)|/dp ≠ 0 at p = p̄ and σ^j(p̄) ≠ 1 for j = 1, 2, 3, 4, then there is a change of coordinates such that the (Poincaré) map takes the following form in polar coordinates

(2.18) P_r(r, θ) = r + d(p − p̄)r + ar³,  P_θ(r, θ) = θ + φ + br²,

where a, b, and d are parameters. Provided a ≠ 0, this normal form indicates that a closed curve generically bifurcates from the fixed point; this closed curve corresponds to a two-dimensional invariant torus. Note that the strong resonance cases σ^j(p̄) = 1 for j = 1 and j = 2 correspond to the saddle-node and period-doubling bifurcations, and the two other cases j = 3, 4 may lead to the absence of a closed curve or even several invariant curves (see Kuznetsov [1995], p. 515).
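To see the flip case concretely, the following Python sketch (our toy iteration; the parameter value is chosen arbitrarily) iterates the supercritical flip normal form P(x) = −(1 + p)x + x³, whose leading minus sign gives the multiplier P′(0) = −(1 + p) ≈ −1. For small p > 0 the fixed point x = 0 is unstable and the iterates settle onto the stable period-2 orbit {+√p, −√p}, the discrete analogue of the 2T-periodic orbit branching off the limit cycle.

```python
import math

def P(x, p):
    """Supercritical flip normal form; multiplier at 0 is -(1 + p)."""
    return -(1.0 + p) * x + x ** 3

def attracting_pair(p, x0=0.05, transient=500):
    """Iterate past a transient and return one attractor point together
    with its image under P (for small p > 0: the period-2 pair)."""
    x = x0
    for _ in range(transient):
        x = P(x, p)
    return x, P(x, p)

x1, x2 = attracting_pair(0.1)
```

For p = 0.1 the pair straddles 0 at about ±√0.1 ≈ ±0.316, and applying P twice returns each point to itself, confirming the period-2 orbit predicted by the normal form.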

URL: https://www.sciencedirect.com/science/article/pii/S1570865908002032

The Regulation of Gene Expression by Operons and the Local Modeling Framework

Matthew Macauley, ... Robin Davies, in Algebraic and Combinatorial Computational Biology, 2019

4.6.2 TURING: Algorithms for Computation With FDSs

TURING is the newest version of a software package developed by Reinhard Laubenbacher's research group. The earliest version was called Discrete Visualizer of Dynamics (DVD), which was replaced by Analysis of Dynamic Algebraic Models (ADAM). The previous versions had more hard-coded features, but outside users could not contribute. By contrast, TURING is crowd-sourced software that has only bare-bones capabilities built in, together with a framework that lets people develop algorithms anyone can use without prior coding knowledge. TURING debuted online in 2017 and is still in its infancy. In this section, we will show how to use TURING to visualize the phase space of a Boolean network, summarize the existing algorithms, and discuss how to contribute new features, with the hope that the interested reader will get involved.

Much like GINsim, TURING has the capability to plot the (synchronous) phase space of a local model. It can be accessed at http://www.discretedynamics.org/. On the website, one should see a menu of links.

Clicking "Join Us" allows one to create an account, though this is only necessary for those who want to contribute to TURING. The individual who only wants to run the existing algorithms does not gain anything from having an account. There are two ways to contribute to TURING:

(i)

adding new algorithms; and

(ii)

adding new workflows.

The former should be self-explanatory. The latter can be thought of as utilizing multiple existing algorithms in sequence: for example, run Algorithm A, then feed its output into Algorithm B, and so on. As of the writing of this book (early 2018), there are no existing workflows in TURING, but they are coming.

To get a feel for what it is like to run an algorithm in TURING, click either the Algorithms tab or the "New Algorithms" button. That will take you to a page with a list of algorithms and descriptions. Click on the one called Cyclone, which has the description "Calculate Dynamics of a discrete dynamical system using exhaustive search." All of these algorithms have a "Load sample data" button, and modifying the sample data is a good way to learn how each one works.

We will do an explicit example now using the three-node local model from Running Example 4.1. Click "Load sample data" and now modify the code so it describes the model in Eq. (4.7). See the left half of Fig. 4.11 for what this should look like; it should be fairly self-explanatory. Click RUN COMPUTATION, and the synchronous phase space should appear in plain text, as shown on the right half of Fig. 4.11. Click the visualization tab to replace this right window with a .png image of this phase space, which is shown on the left in Fig. 4.12. This image will always be sized to fit the window, so for larger examples, it is best to view this by right-clicking and choosing "Save image as…" and then opening the file directly.

Fig. 4.11. Simulating our three-node running example (x̄₂, x₁x₃, x̄₂) using the free crowd-sourced software TURING: Algorithms for Computation with Finite Dynamical Systems.

Fig. 4.12. Left: The synchronous phase space of the local model from Running Example 4.1 and Eq. (4.7) rendered with TURING. Right: The menu in TURING that one sees upon creating an account and logging in.

Exercise 4.22

Use TURING to visualize the synchronous phase space of our ara operon local model from Section 4.4.2. Use (A_e, A_em, Ara, G_e) = (1, 1, 1, 0) for the parameter vector, and then try (A_e, A_em, Ara, G_e) = (0, 1, 1, 0). It will be helpful to use Macaulay2 to convert the Boolean expressions into polynomials over F₂, as we did in Section 4.5.

Let us pause to point out several features of the input for the Cyclone algorithm. Note that in line 4 of Fig. 4.11, we called the variables x1 x2 x3, but this is not necessary. We could have instead named them with letters, like we did when we proposed our lac and ara operon models. Also note that the algorithm allows for the variables to have a different number of states (line 5). For example, we could give one variable three states (F 3 = {0, 1, 2}) but make the rest of them binary. Finally, in line 6 (SPEED OF VARIABLES) the algorithm allows the variables to be updated at different rates.
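For readers who want to see what Cyclone's exhaustive search actually computes, here is a minimal Python sketch (ours, not TURING code) for the three-node model of Fig. 4.11, reading x₁x₃ as the conjunction x1 AND x3:

```python
from itertools import product

def step(x):
    """Synchronous update of (f1, f2, f3) = (NOT x2, x1 AND x3, NOT x2)."""
    x1, x2, x3 = x
    return (1 - x2, x1 & x3, 1 - x2)

# The synchronous phase space: every state and its unique successor.
phase_space = {x: step(x) for x in product((0, 1), repeat=3)}

def attractor(x):
    """Follow the trajectory from x until a state repeats; the tail of
    the trajectory from that repeated state onward is the attractor."""
    seen = []
    while x not in seen:
        seen.append(x)
        x = phase_space[x]
    return tuple(seen[seen.index(x):])
```

Exhaustive search over the eight states shows that this model has no fixed points and that every trajectory falls into a single cycle of length 4.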

As of the writing of this book in early 2018, TURING has seven algorithms on local models published by contributors and available on the website. The following is a list of them with descriptions, not including Cyclone, which was already discussed.

SDDS. This algorithm analyzes a nondeterministic version of local models called stochastic discrete dynamical systems, introduced in [39]. These are basically local models along with probabilities of the functions being applied at each update step. For more information, read on to Chapter 5.

SDDS Control. This implements control theory algorithms for the aforementioned SDDS that were introduced in [40]. These are also covered in Section 5.5.

Discretize. Given time-series data, such as expression levels or concentrations of gene products, this algorithm discretizes it, so it can fit into the framework of a local model, using a method from [41]. The states can be binary or more refined. An example of where this was used is discussed later in this book (Section 6.5), where time-series data from a gene network in Caenorhabditis elegans was discretized into states from F 7 = { 0 , , 6 } .

BasicRevEng. A way to build, or "reverse-engineer," a local model from knowing only part of its state space. This was developed in [42]; see Chapter 3 for a nice survey and tutorial. This is discussed briefly in Section 6.6.

BN Reduction. This algorithm takes a local model whose phase space is too large to compute, and reduces it by eliminating variables while preserving key features such as its fixed points. This was developed in [43]; see [44, Chapter 6] for a nice survey and tutorial.
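In the simplest situation, reduction works by substitution. The toy model below is hypothetical, but it shows the mechanism in the spirit of [43]: if a variable's update function does not depend on that variable itself, substituting the function for the variable in the remaining rules eliminates it while preserving the fixed points (restricted to the surviving variables):

```python
from itertools import product

# Toy 3-variable Boolean model (hypothetical update functions):
f1 = lambda x1, x2, x3: x2 & x3
f2 = lambda x1, x2, x3: x1
f3 = lambda x1, x2, x3: x1 | x2          # independent of x3 -> eliminable

# Reduced 2-variable model: every occurrence of x3 is replaced by f3.
g1 = lambda x1, x2: f1(x1, x2, f3(x1, x2, 0))
g2 = lambda x1, x2: f2(x1, x2, 0)

full_fixed = [s for s in product((0, 1), repeat=3)
              if (f1(*s), f2(*s), f3(*s)) == s]
red_fixed = [s for s in product((0, 1), repeat=2)
             if (g1(*s), g2(*s)) == s]
```

Projecting the fixed points of the full model onto (x1, x2) recovers exactly the fixed points of the reduced model.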

Gfan. This algorithm takes in discretized time-series data and reverse-engineers local models using an advanced algebraic object called a Gröbner fan, which in some sense describes all possible Gröbner bases that can arise from the data by varying the term order. This was developed in [45]; see also [9, Section 3.5] for more information.

Users can contribute new algorithms in TURING using a program called AlgoRun. This program was designed to package algorithms in bioinformatics and computational science using so-called virtual containers called dockers. This technology was created by the company Docker, Inc., in 2013 [46], and it eliminates the need for the user to write code, thereby also doing away with common problems such as broken libraries or missing software dependencies. Dockers are portable across platforms and can be an attractive alternative to using virtual machines.

Adding an algorithm to TURING requires one to visit https://hub.docker.com/ to download and install Docker on their local machine. The AlgoRun website http://algorun.org/ contains both documentation and examples of how to use AlgoRun to create algorithms for TURING, and then how to publish them to the AlgoRun website.

Another way to contribute to TURING is to add a workflow, or chain of AlgoRun algorithms. This is done with a drag-and-drop visual program called AlgoPiper, where the workflows are called pipelines. AlgoPiper runs in the cloud; see http://algopiper.org/ for documentation and example pipelines. Since AlgoPiper is very new, the only existing pipelines involve algorithms for nucleic acid databases (e.g., DNA and RNA). However, seeing how these were built from AlgoRun algorithms should be useful for those looking to design similar pipelines for local models in TURING. Fig. 4.13 shows what the AlgoPiper interface looks like.

Fig. 4.13. AlgoPiper creates workflows, or pipelines, by stringing together AlgoRun algorithms.

We will conclude this section by describing a basic algorithm that does not yet exist in TURING but should be fairly straightforward to implement. Recall from Section 4.5 that when we analyzed our local models of the lac and ara operons, we had to consider all possible parameter vectors (6 for the lac and 12 for the ara operon) and compute the fixed points separately. Similarly, if we want to visualize the phase space using GINsim or TURING, we have to enter these individually. Using TURING, we could create a workflow that lets the user specify the parameters as inputs, for example, Ae=0; Aem=1; Ara=1; Ge=0, which would get fed into Cyclone before the phase space is created. The functions in Cyclone would be defined using these parameters, so creating the phase space with a different parameter vector would only require the user to change the input parameters, rather than edit the actual functions in plain text, as in the left half of Fig. 4.11. This feature was actually built into ADAM, the prior version of TURING. However, TURING has only the bare-bones features hard-coded, and its crowd-sourcing mission leaves the design of these types of algorithms up to users to contribute.
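The parameter-passing idea can be prototyped directly. In the sketch below, only the parameter names follow our ara operon model; the two internal variables and their rules are hypothetical. The update functions are built from a parameter dictionary, so recomputing the phase space for a new parameter vector requires no edits to the function text:

```python
from itertools import product

def make_update(params):
    """Build update functions with the external parameters baked in."""
    Ae, Aem, Ara, Ge = params["Ae"], params["Aem"], params["Ara"], params["Ge"]
    def f(state):
        M, E = state                          # two internal variables
        M_next = int((Ae or Aem) and not Ge)  # hypothetical rule
        E_next = int(Ara and M)               # hypothetical rule
        return (M_next, E_next)
    return f

def phase_space(params):
    f = make_update(params)
    return {s: f(s) for s in product((0, 1), repeat=2)}

# Re-running with a different parameter vector needs no edits to the rules:
ps1 = phase_space({"Ae": 1, "Aem": 1, "Ara": 1, "Ge": 0})
ps2 = phase_space({"Ae": 0, "Aem": 0, "Ara": 1, "Ge": 1})
```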

https://www.sciencedirect.com/science/article/pii/B9780128140666000040

Discrete Dynamical Systems, Bifurcations and Chaos in Economics

Wei-Bin Zhang, in Mathematics in Science and Engineering, 2006

7.5 Growth with different types of economies

This section introduces a growth model, proposed by Galor and Weil, that captures the historical evolution of population, technology, and output. The economy evolves through three regimes that have characterized economic development: from a Malthusian regime (where technological progress is slow and population growth prevents any sustained rise in income per capita), into a post-Malthusian regime (where technological progress rises and population growth absorbs only part of output growth), to a modern growth regime (where population growth is reduced and income growth is sustained). The model is defined within the OLG framework with a single good. Production uses land and efficiency units of labor as inputs. The supply of land is exogenously fixed, while the number of efficiency units is endogenous.

The output produced at time t is

Y(t) = H^α(t) (A(t)X)^(1−α),

where X and H(t) are the quantities of land and efficiency units of labor employed in production at t, 0 < α < 1, and A(t) > 0 is the endogenously determined technological level at t. The output per worker at t is

y(t) = h^α(t) x^(1−α)(t) ≡ y(h(t), x(t)),

where y_h > 0, y_x > 0 for any (h, x) ≫ 0, with

h ≡ H/N, x ≡ AX/N,

where N(t) is the total labor force at t. Suppose that there are no property rights over land and the return to land is thus zero. The wage per efficiency unit of labor is therefore equal to its average product

(7.5.1) w(t) = (x(t)/h(t))^(1−α) ≡ w(x(t), h(t)).
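Since the wage equals the average product of efficiency labor, (7.5.1) follows directly from the production function and the definitions of h and x; spelled out:

```latex
w(t) = \frac{Y(t)}{H(t)}
     = H^{\alpha-1}(t)\,\bigl(A(t)X\bigr)^{1-\alpha}
     = \left(\frac{A(t)X}{H(t)}\right)^{1-\alpha}
     = \left(\frac{x(t)}{h(t)}\right)^{1-\alpha},
```

using x(t)/h(t) = (A(t)X/N(t))/(H(t)/N(t)) = A(t)X/H(t).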

Each individual born at period t − 1 lives two periods. In the first period, they consume a fraction of their parents' time. In the second period, they allocate their endowed one unit of time between child-rearing and labor force participation. In each period t a generation that consists of N(t) identical individuals joins the labor force. The utility is represented by

u(t) = c^(1−γ)(t) [w(t+1) n(t) h(t+1)]^γ, c(t) ≥ c̃ > 0,

where c̃ is a subsistence level, n(t) is the number of children of individual t, h(t + 1) is the level of human capital of each child, and w(t + 1) is the wage per efficiency unit of labor at time t + 1. The utility function is monotonically increasing and strictly quasi-concave. Let

τ_0 + τ e(t+1)

be the total time cost for a member of generation t of raising a child with education quality e(t + 1). Define potential income as the amount that generation t would earn if it devoted its entire time endowment to labor force participation; that is, potential income is given by w(t)h(t). This income is divided between child-rearing and working. Hence, in the second period of life, the individual faces the budget constraint

w(t)h(t)n(t)[τ_0 + τ e(t+1)] + c(t) ≤ w(t)h(t).

It is assumed that the level of human capital of members of generation t, h(t + 1), is an increasing function of their education e(t + 1) and a decreasing function of the rate of progress in the state of technology from period t to t + 1

g(t+1) ≡ (A(t+1) − A(t))/A(t).

That is

h(t+1) = h(e(t+1), g(t+1)), with h, h_e, h_gg, h_eg > 0 and h_g, h_ee < 0 for all (e, g) ≥ 0.

The erosion effect notwithstanding, the overall effect of technological progress on output is assumed to be positive, i.e.

∂y(h(t), x(t))/∂g(t) > 0.

It is further assumed that

τ_0 h_e(0, 0) − τ h(0, 0) < 0.

It is straightforward to show that under this assumption, there exists a value g̃ such that

τ_0 h_e(0, g̃) − τ h(0, g̃) = 0.

Denote by z(t) the level of potential income, and by z̃ the level of potential income at which the subsistence constraint is just binding; that is,

z(t) = h(t)w(t), z̃ = c̃/(1 − γ).

By equation (7.5.1), we have

(7.5.2) z(t) = h^α(t) x^(1−α)(t) = h^α(e(t), g(t)) x^(1−α)(t) ≡ z(e(t), g(t), x(t)).

Members of generation t choose n(t) and h(t + 1) to maximize their intertemporal utility subject to the budget constraint. It can be proved that the optimal choice is characterized as follows:

(7.5.3) n(t) = γ/[τ_0 + τ e(g(t+1))], if z(t) ≥ z̃; n(t) = [1 − c̃/z(t)]/[τ_0 + τ e(g(t+1))], if z(t) < z̃,

(7.5.4) e(t+1) = 0, if g(t+1) ≤ g̃; e(t+1) = e(g(t+1)), if g(t+1) > g̃,

where e(g) is the function relating e and g implicitly defined by

(τ_0 + τ e) h_e(e, g) = τ h(e, g).

It is assumed that e″ < 0 for any g(t) > g̃.
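As a consistency check on the first branch of (7.5.3), write τ ≡ τ_0 + τ e(t+1). When the subsistence constraint does not bind, the budget constraint holds with equality, c(t) = w(t)h(t)[1 − n(t)τ], and substituting this into the utility function gives the first-order condition (of the logarithm of the objective) for n(t):

```latex
\max_{n}\;\bigl[wh(1-n\tau)\bigr]^{1-\gamma}\bigl[w'nh'\bigr]^{\gamma}
\;\Longrightarrow\;
-\frac{(1-\gamma)\,\tau}{1-n\tau}+\frac{\gamma}{n}=0
\;\Longrightarrow\;
n=\frac{\gamma}{\tau},
```

where w = w(t), h = h(t), w′ = w(t+1), h′ = h(t+1). This also yields c(t) = (1 − γ)w(t)h(t) = (1 − γ)z(t), which is why the subsistence constraint binds precisely when z(t) < z̃ = c̃/(1 − γ).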

We have described the behavior of the producers and consumers. We now describe technological change by the following equation:

(7.5.5) g(t+1) ≡ (A(t+1) − A(t))/A(t) = g(e(t)), g(0) > 0, g′ > 0, g″ < 0.

The size of working population at time t + 1 is determined by

(7.5.6) N(t+1) = n(t)N(t),

where N0 is historically given. Utilizing

x(t) = A(t)X/N(t),

and equations (7.5.5) and (7.5.6), we have

x(t+1) = [(1 + g(t+1))/n(t)] x(t).
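Spelled out, the step is:

```latex
x(t+1) = \frac{A(t+1)X}{N(t+1)}
       = \frac{[1+g(t+1)]\,A(t)X}{n(t)\,N(t)}
       = \frac{1+g(t+1)}{n(t)}\,x(t).
```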

Substituting equations (7.5.3) and (7.5.5) into the above equation yields

(7.5.7) x(t+1) = {[τ_0 + τ e(g(e(t)))][1 + g(e(t))]/γ} x(t), if z(t) ≥ z̃; x(t+1) = {[τ_0 + τ e(g(e(t)))][1 + g(e(t))]/[1 − c̃/z(t)]} x(t), if z(t) < z̃.

The construction of the model is thus completed. The system consists of equations (7.5.2)-(7.5.7). In the dynamical analysis, the economy is divided into two regimes: the subsistence regime, characterized by z(t) ≤ z̃, and the modern regime, characterized by z(t) > z̃. Although the analysis is not complicated, it would take considerable space. The reader is encouraged to analyze the behavior of the model, and then to read the analysis by Galor and Weil.
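Readers who want a feel for the dynamics can iterate the system numerically. The sketch below implements (7.5.2)-(7.5.7) in Python with hypothetical reduced forms for g(e), e(g), and h(e, g), chosen only to respect the stated sign and curvature assumptions; they and the parameter values are not taken from Galor and Weil:

```python
import math

alpha, gamma = 0.6, 0.4
tau0, tau1 = 0.15, 0.2           # child-rearing time: tau0 + tau1 * e
c_bar = 1.0                      # subsistence consumption level c-tilde
g_tilde = 0.04                   # education threshold g-tilde
z_tilde = c_bar / (1.0 - gamma)  # potential income at which c-tilde binds

def g_of_e(e):    # technology growth (7.5.5): g(0) > 0, g' > 0, g'' < 0
    return 0.05 + 0.5 * (1.0 - math.exp(-e))

def e_of_g(g):    # education choice (7.5.4): zero below g_tilde
    return max(0.0, g - g_tilde)

def h_of_eg(e, g):  # human capital: h_e > 0, h_g < 0 (erosion)
    return (1.0 + e) / (1.0 + e + g)

def step(e, g, x):
    z = h_of_eg(e, g) ** alpha * x ** (1.0 - alpha)   # potential income (7.5.2)
    g_next = g_of_e(e)
    e_next = e_of_g(g_next)
    tau = tau0 + tau1 * e_next
    if z >= z_tilde:
        n = gamma / tau                               # (7.5.3), interior case
    else:
        n = (1.0 - c_bar / z) / tau                   # (7.5.3), subsistence case
    return e_next, g_next, (1.0 + g_next) / n * x     # (7.5.7)

e, g, x = 0.0, 0.05, 5.0
for _ in range(100):
    e, g, x = step(e, g, x)
```

Varying g_tilde relative to g(0) controls whether education, and with it rising technological progress, ever takes off in this toy parameterization.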

https://www.sciencedirect.com/science/article/pii/S0076539206800263

Parallelization of a Chaotic Dynamical Systems Analysis Procedure

J.M. McDonough, T. Yang, in Parallel Computational Fluid Dynamics 2001, 2002

2 FORM OF THE SUBGRID-SCALE MODEL

The form of the subgrid-scale model introduced in [3] is

(1) q* = A_q ζ_q M_q,

where A_q and ζ_q are, respectively, amplitude factors and anisotropy corrections derived from Kolmogorov's (mainly K41) theories (see Frisch [7] for detailed discussions), and the M_q are chaotic maps. In early work (including [3]) these were chosen somewhat arbitrarily, although the logistic map was widely used, at least in part due to the observation in [7] that quadratic maps of this nature might be viewed as a "poor man's Navier–Stokes equation." (In fact, the coupled DDSs derived and analyzed in [4] are more deserving of this epithet.) Once the SGS result q* is calculated for each component of the solution vector using (1), it is combined with the corresponding resolved-scale quantity, say q̄, to form the complete (i.e., large-scale plus small-scale) solution:

(2) q(x, t) = q̄(x, t) + q*(x, t).

Because the small-scale (SGS) behavior is now used to directly augment the resolved scale (in contrast to more classical forms of LES, where it is used to construct Reynolds stresses; see, e.g., Domaradzki & Saiki [8]), it is imperative that q* be accurate, at least in a qualitative structural sense, and in turn this implies that the maps M_q must accurately mimic the SGS (high-frequency) temporal behavior. Clearly, the best way to guarantee this is to directly employ experimental data in the model-building process.
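A minimal sketch of Eqs. (1) and (2), with the logistic map playing the role of M_q; the amplitude and anisotropy values below are placeholders rather than the Kolmogorov-derived factors of [3]:

```python
def logistic(m, r=3.99):
    """One iterate of the logistic map, M <- r*M*(1 - M); r near 4 gives
    chaotic behavior on (0, 1)."""
    return r * m * (1.0 - m)

def sgs_series(A_q, zeta_q, m0, nsteps):
    """Fluctuating SGS component per Eq. (1): q* = A_q * zeta_q * M_q, with
    the chaotic map supplying the high-frequency temporal behavior."""
    out, m = [], m0
    for _ in range(nsteps):
        m = logistic(m)
        out.append(A_q * zeta_q * m)
    return out

q_bar = 1.0                              # a resolved-scale value (illustrative)
q_star = sgs_series(A_q=0.05, zeta_q=1.0, m0=0.3, nsteps=4)
q_total = [q_bar + qs for qs in q_star]  # Eq. (2): q = q-bar + q*
```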

https://www.sciencedirect.com/science/article/pii/B9780444506726500839