Introduction

In these pages I will present some simple examples of dynamical systems. A dynamical system is a system of differential equations of the form dx/dt = f(x, a), where x is the state (a point of the phase space) and a is the set of control parameters.

First example : saddle-node bifurcation

The dimension of the phase space is 1.
The dimension of the control space is 2.

To study this system, we must discuss its equilibria.
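The original formula is not reproduced on this page; as a minimal sketch, assume the usual saddle-node normal form with a single control parameter a (the page itself uses a two-dimensional control space). The equilibria are the roots of f:

    \dot{x} = f(x) = a + x^2 ,
    \qquad f(x) = 0 \;\Longrightarrow\; x_{1,2} = \mp\sqrt{-a} \quad (a < 0).

With this form there are two equilibria for a < 0, one for a = 0 and none for a > 0; the two equilibria merge and disappear as a crosses 0, which is the saddle-node (fold) bifurcation.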

The function f is plotted here.

For the different values of the parameters we have:

[Figure: graph of f(x) for several values of the parameters]
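As an illustration, here is a minimal plotting sketch; the normal form f(x) = a + x^2 and the parameter values are assumptions, not the original figure:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-2.0, 2.0, 400)
    for a in (-1.0, 0.0, 1.0):                       # assumed parameter values
        plt.plot(x, a + x**2, label=f"a = {a}")      # assumed normal form f(x) = a + x^2
    plt.axhline(0.0, color="black", linewidth=0.5)   # the equilibria are the zero crossings
    plt.xlabel("x")
    plt.ylabel("dx/dt")
    plt.legend()
    plt.show()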

Trajectories

For a value of the parameters for which the two equilibria exist, we have, for example, this scheme:

[Figure: phase line with the two equilibria x1 and x2]

Note that a trajectory can't cross itself, so an equilibrium (node) separates two distinct trajectories, one on each side of it. The possible trajectories are therefore (see the sketch after this list):

the point x1,
the point x2,
]x1, x2[,
]-oo, x1[,
]x2, +oo[.
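A minimal sketch of this phase-line decomposition, again assuming f(x) = a + x^2 with the hypothetical value a = -1: the sign of f on each open interval gives the direction of the flow.

    import numpy as np

    a = -1.0                                  # assumed value with two equilibria
    f = lambda x: a + x**2                    # assumed normal form
    x1, x2 = -np.sqrt(-a), np.sqrt(-a)        # x1 < x2

    # one sample point inside each open interval of the phase line
    samples = {"]-oo, x1[": x1 - 1.0,
               "]x1, x2[": 0.5 * (x1 + x2),
               "]x2, +oo[": x2 + 1.0}
    for name, xs in samples.items():
        direction = "to the right" if f(xs) > 0 else "to the left"
        print(f"{name}: dx/dt = {f(xs):+.2f}, flow {direction}")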

Stability of the different equilibria

To study the stability of an equilibrium, for example x1, we disturb it slightly: x(t) = x1 + u(t). Linearising, du/dt ≈ f'(x1) u(t), so there are two cases:

f'(x1) < 0 : stable,

f'(x1) > 0 : unstable.
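Writing the linearisation out (a standard computation; it is not copied from the original page):

    \dot{x} = f(x), \quad x(t) = x_1 + u(t)
    \;\Longrightarrow\;
    \dot{u} = f(x_1 + u) \approx f(x_1) + f'(x_1)\,u = f'(x_1)\,u ,
    \qquad u(t) = u(0)\,e^{f'(x_1)\,t} .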

If we come back to the curve of equilibria as a function of the parameters, we can easily see that one branch is stable and the other is unstable:

[Figure: bifurcation diagram with the stable and the unstable branch]
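A minimal sketch of such a diagram, under the same assumed normal form dx/dt = a + x^2 (the stable branch is drawn solid, the unstable one dashed):

    import numpy as np
    import matplotlib.pyplot as plt

    a = np.linspace(-2.0, 0.0, 200)                        # region where equilibria exist
    plt.plot(a, -np.sqrt(-a), "b-", label="stable branch x1 = -sqrt(-a)")
    plt.plot(a, +np.sqrt(-a), "r--", label="unstable branch x2 = +sqrt(-a)")
    plt.xlabel("a")
    plt.ylabel("equilibrium position x")
    plt.legend()
    plt.show()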

Second example : pitchfork bifurcation

The dimension of the phase space is 1.

The dimension of the control space is 2.

To study this system, we must discuss its equilibria.
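Here too the original formula is missing; as a minimal sketch, assume the usual supercritical pitchfork normal form with a single parameter a:

    \dot{x} = f(x) = a\,x - x^3 ,
    \qquad f(x) = 0 \;\Longrightarrow\; x = 0 \ \text{(for every } a\text{)}, \quad x = \pm\sqrt{a} \ \ (a > 0).

With this form there is a single equilibrium for a < 0 and three for a > 0.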

The function f is plotted here.

For the different values of the parameters we have:

[Figure: graph of f(x) for several values of the parameters]

Stability of the different equilibria

By plotting the derivative of the function f, it is easy to find the stable and the unstable equilibria of the system:

[Figure: stable and unstable equilibria]
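Under the assumed normal form above, the derivative gives the stability directly (a sketch, not the original computation):

    f'(x) = a - 3x^2 ,
    \qquad f'(0) = a , \qquad f'(\pm\sqrt{a}) = -2a .

So, for this form, x = 0 is stable for a < 0 and unstable for a > 0, while the two branches x = ±sqrt(a), which only exist for a > 0, are stable.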

Third example : Hopf bifurcation

The dimension of the phase space is 2.

The dimension of the control space is 4.

The calculation is made in polar coordinates.
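The original equations are not reproduced; as a minimal sketch, assume the standard Hopf normal form written in polar coordinates (r, theta), keeping only the parameters needed here:

    \dot{r} = r\,(a + b\,r^2) , \qquad \dot{\theta} = \omega .

The sign of b separates the two cases drawn below: for b < 0 a stable limit cycle of radius r = sqrt(-a/b) appears when a > 0 (supercritical case), while for b > 0 an unstable cycle of radius r = sqrt(-a/b) exists when a < 0 (subcritical case).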

Once all the calculations are done, two cases can be drawn.

First case :

[Figure: phase portrait, first case]

In a 3-D space:

This is the supercritical Hopf bifurcation.
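A minimal simulation sketch of the supercritical case, integrating the assumed polar form with the hypothetical values a = 1, b = -1, omega = 1; trajectories started inside and outside the cycle both converge to r = 1:

    import numpy as np

    a, b, omega = 1.0, -1.0, 1.0              # assumed supercritical parameters
    dt, steps = 0.01, 2000

    for r0 in (0.1, 2.0):                     # start inside and outside the limit cycle
        r, theta = r0, 0.0
        for _ in range(steps):
            r += dt * r * (a + b * r**2)      # dr/dt = r (a + b r^2)
            theta += dt * omega               # dtheta/dt = omega
        print(f"r0 = {r0}: after {steps} steps r = {r:.3f} (limit cycle at r = 1)")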

Second case :

[Figure: phase portrait, second case]

In a 3-D space:

This is the subcritical Hopf bifurcation.