__INTRODUCTION__

The relentless advance of technology is producing ever more
complex systems, whose interactions at a system level are becoming harder to
predict either by analysis or past experience. As a result many projects in all
sectors of engineering are coming to fruition with unexpected defects and
performance failures, which are frequently proving very time consuming and
expensive to solve, invariably at a critical point in a contract program. A
recently published survey funded by the DTI claims that as much as £300 million is
wasted by British industry through inadequate use of simulation technology. Of
more than 400 UK sites surveyed, fewer than 10% were making use of simulation
techniques to identify problems. Those that were reported payback times of
weeks or months rather than years, with significant cost savings.

The rail industry has, by and large, fallen into the same
malaise as the rest of the engineering world, with only a few pockets of
isolated activity during the past two decades. Traditionally large centrally
based mainframe computing systems have provided the resources for simulation in
most organisations, firstly for motor design programs, and then train
performance programs, which evolved as a blend of theoretical and empirical
methods. This combination of theoretical and experimental rules was usually
programmed in a rather haphazard, bottom-up manner and allowed to grow
indefinitely; today modern computing tools such as databases and expert shells
might be used instead. The result was code that was inevitably very hard to
manage, yet in many cases, despite incoherent so-called spaghetti programs, successful
design tools were produced.

The realisation of these benefits slowly permeated
engineering departments in both industry and academia, and by the early 1970s
students were being encouraged to combine theoretical, simulation and practical
approaches to problem solving. It was around this time that academics were
first devoting efforts to create a universal approach to simulation, thus
creating a new discipline in its own right. The intention was to provide the
framework for simulation across the whole sphere of science and engineering.
The ‘standard’ to emerge was defined by Strauss ^{[1]}
for the modelling of continuous time systems, and its concepts were widely
accepted. Time domain based continuous system simulation language (CSSL) as it
became known was adopted by many organisations with engineering interests and
variants both commercial and non-commercial appeared.

Of the commercial versions a few have attained recognised
status,^{[2]}
such as ACSL and CSMP. The Advanced Continuous Simulation Language (ACSL) is
aimed at the engineer who analyses continuous systems described by linear or
non-linear time dependent differential equations or transfer functions. This
involves extracting system matrix or state space forms from input data, and
proceeding to perform a numerical time integration procedure. This is a
classical approach to simulation that in its most elementary form could be
considered analogous to the nearly extinct analogue computer. State space will
be discussed in more detail later in this paper, but first it is appropriate to
discuss other forms of digital simulation applicable to traction power
electronic systems.

__CIRCUIT BASED SIMULATION__

Most practising power electronic engineers are primarily
concerned with voltage and current waveforms in non-ideal switching circuits,
generally falling under a rectifier, converter, chopper, or inverter classification.
As a result of waveform analysis suitable component ratings can be specified
and a thermal performance determined given an appropriate duty cycle.
An example of a typical circuit topology is shown in fig. 1 ^{[3]}. Associated
with each switching device is a snubber circuit to handle switch-on and
switch-off of the power device. A switch-on snubber limits the rate of rise of
current into a device, ensuring that a low on-state volt drop is achieved before
reaching full current. The switch-off
snubber provides a short-term shunt for current as a device switches off,
preventing excessive rate of change of voltage.

A simple snubbing arrangement is shown in fig. 2. In practice this is quite lossy with energy being dissipated in circuit resistance. Consequently, many attempts have been made to design regenerative or energy recovery type snubbers where snubber energy is returned to the supply in some measure. This becomes increasingly important at higher voltages and has given rise to many exotic circuits in an attempt to regain all the losses during snubbing, as the example in fig. 3 shows. At this level of complexity a design engineer needs help if he is to rightly judge the balance of system efficiency and thermal loss against other factors such as cost and reliability.

A time domain circuit simulator is the ideal tool in this
instance. This is based on a mesh or network solution, requiring the problem
to be specified as a set of nodes and branches. If the components were all
passive and time independent, then by equating voltages across branches, and
summing currents into nodes, a set of linear differential equations would be
formed. Subsequent integration then yields the desired voltage and current
waveforms. The basic network analysis rules are:

Σ v_{n} = 0    (loop equation)

Σ i_{n} = 0    (node equation)
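As a minimal illustration of these rules, the node equations of a small resistive network can be assembled and solved directly. The component values and topology below are hypothetical, chosen only to keep the arithmetic visible:

```python
# Node-equation sketch for a purely resistive network (hypothetical values):
# Vs--R1--node1--R2--node2--R3--gnd, with R4 from node1 to gnd.
Vs, R1, R2, R3, R4 = 10.0, 100.0, 220.0, 330.0, 470.0

# Summing currents out of each node (KCL) gives the linear system A.v = b:
# node1: V1*(1/R1 + 1/R2 + 1/R4) - V2/R2 = Vs/R1
# node2: -V1/R2 + V2*(1/R2 + 1/R3)       = 0
a11 = 1/R1 + 1/R2 + 1/R4
a12 = -1/R2
a21 = -1/R2
a22 = 1/R2 + 1/R3
b1, b2 = Vs/R1, 0.0

# 2x2 solve by Cramer's rule
det = a11*a22 - a12*a21
V1 = (b1*a22 - a12*b2) / det
V2 = (a11*b2 - b1*a21) / det

# Check KCL residual at node2: current in from node1 equals current to ground
residual = (V1 - V2)/R2 - V2/R3
```

With time-dependent branches (L and C) the same bookkeeping yields differential rather than algebraic equations, which is where the integration methods discussed later come in.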

This simple approach is not appropriate for the snubbing
example considered above because it has non-linear elements (diodes) and
switching power devices, which may require modelling as something more than
ideal switches. The ability to handle piecewise linear and instantaneous
switching discontinuities, often referred to as discrete state events (DSEs),
increases the mathematical complexity considerably, and at this point the use
of computer simulation to solve the problem becomes mandatory.

A number of approaches may be taken depending on the method
of handling the DSE. One is to adopt a linearised sampled data procedure ^{[4]}
to determine a generalised model. This is useful for performing a control
system analysis, but not for circuit simulation where causality is less clear.
Indeed it may be argued that if a circuit has indistinct inputs or outputs
(e.g. a supply rail), it is not causal in the same sense that a transfer
function is. The consequence is two extra problems to consider during
numerical integration. Firstly, the event time has to be detected and secondly
as a result accurate integration in the vicinity of the event has to be
guaranteed. The detection of a DSE is an iterative process occurring within
the numerical integration, and if a large number of DSEs occur during a
simulation run the computing time will increase significantly.
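The iterative detection described above can be sketched as follows, using bisection to refine the event time between two fixed integration steps. The current waveform here is a hypothetical stand-in for a branch current crossing a device threshold:

```python
import math

def current(t):
    # stand-in waveform for the branch current (hypothetical)
    return math.sin(t) - 0.3

def locate_event(f, t0, t1, tol=1e-9):
    """Bisect for the time at which f changes sign within [t0, t1]."""
    f0 = f(t0)
    while t1 - t0 > tol:
        tm = 0.5 * (t0 + t1)
        if f0 * f(tm) <= 0.0:
            t1 = tm              # sign change lies in the left half
        else:
            t0, f0 = tm, f(tm)   # sign change lies in the right half
    return 0.5 * (t0 + t1)

# Step with a coarse Δt until a sign change brackets the event, then
# refine: this is the iterative process inside the numerical integration.
dt, t = 0.1, 0.0
while current(t) * current(t + dt) > 0.0:
    t += dt
t_event = locate_event(current, t, t + dt)
```

Each such refinement costs extra function evaluations inside the step, which is why a simulation peppered with DSEs runs so much more slowly.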

The ability of a circuit simulator to handle state events efficiently and
accurately is a good measure of its quality. If writing a circuit simulation
program for a specific task it is often possible to take account of
characteristics of the task. For example a dedicated program may be written on
the assumption that the waveforms to be studied have transients much slower
than the sample period, or that a threshold tolerance would in practice be
wider than the error introduced by one sample period. This program may give
perfectly acceptable results in this instance, but on a different problem where
the assumptions would not hold, misleading waveforms would result. Rules for
the development of circuit simulators are:

· Specify the important parameters of the proposed simulation
· Decide if special handling of DSEs is necessary
· Decide if existing component models are sufficiently accurate
· Choose a suitable integration method (discussed further in a later section)
· Determine the integration step length for this model
· Constantly validate the model by reference to simpler problems, and/or by comparison with a known design

Given the spread of modern components from integrated circuits
to FPGAs on the one hand, and voltage controlled capacitors to IGBTs on the
other, it is not realistic to expect a hardware circuit designer to build his
own circuit simulation. As a consequence computer based tools to fulfil this
task have come to the fore. The oldest and most established of these is SPICE,
a family of circuit simulators derived from original development at the University
of California, Berkeley.^{[5]}
SPICE is an acronym for “Simulation Program with Integrated Circuit
Emphasis”, developed from the early 1970s. The aims of the package at
version 3f5, which is available for workstations and PCs, are to:

· confirm circuit ideas before building
· experiment with existing designs
· simulate measurements of unobservable quantities

Such models are excellent at ac and dc analysis, transient and steady state. Analytic tools allow time or frequency responses to be plotted along with spectral analysis, noise and distortion analysis. Also included is a limited capability to perform feedback control analysis. However, this is done by modelling generalised transfer function block diagrams as equivalent circuits, which while feasible becomes very tedious if the transfer function is either high order or non-linear. For this reason circuit simulators should not be seen as effective tools for control systems development.

Various competitors to SPICE have appeared on the market during the past decade, although nearly all claim to have SPICE compatibility in terms of circuit definition, description and component models. A very up market alternative is SABER with a sophisticated user interface for operation on workstations.

SABER's features include component behavioural modelling where functional or mathematical representations may be used to relate points in a circuit. With such a simulator it is possible to input model templates in standard high level languages (FORTRAN, C and PASCAL) as well as its own C-like simulation language. This provides a means of generating traction specific models such as power circuits, motors, drive transmissions and circuit breakers that may incorporate standard library components. Hence, the generation of black box models defined by input/output signals, with many levels of hierarchy is possible. This is suited to the simulation of large systems such as a steady state model of a traction system.

Much is made of the approximately linear increase in execution time of SABER with circuit size, compared to the exponential increase using a SPICE algorithm. This is because more advanced mathematical techniques may be deployed, including multi-rate integration, optimised DSE handling and sparse matrix transform methods. Despite all these innovations, circuit simulators are not yet exceptionally good at executing system studies, and are relatively slow compared to bespoke system simulation targeted at a specialised task.

By way of example of a problem well suited to circuit
simulation consider a non-linear power transmission line. The model includes a
25kV single phase power supply producing a 50Hz fundamental and 2% of 3^{rd},
5^{th}, 7^{th} and 25^{th} harmonics, the feeder
substation transformer and a transmission line. In order to include skin and
core loss effects of the transformer at harmonic frequencies, the resistance
term of the transformer was represented by a cubic frequency dependent
expression. The transmission line was made up of π sections, each with a linear resistance variation with frequency to
represent the skin effect of the conductor. The resulting equivalent circuit
model is shown in fig. 4 with an inductive load added for simplicity.

The simulated results of the source and load waveforms are shown
in figures 5 and 6. It can be seen the small 25^{th} harmonic in the
supply voltage is magnified at the load. This is the result of transmission
line resonance, which in a weak power system can exist at the locomotive transformer
primary terminals. This type of model can clearly be of use in designing and
specifying both transformer and harmonic filters required in an electric
locomotive.
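The resonance mechanism can be illustrated in miniature. The sketch below is not the parameters of the study above: L, C and the linear skin-effect resistance term are assumed values, deliberately chosen so that a single open-ended π section resonates near the 25th harmonic of 50 Hz:

```python
import math

L, C = 3.24e-3, 1.0e-5          # series inductance, total shunt capacitance (assumed)

def series_R(f):
    # assumed linear resistance rise with frequency (skin-effect stand-in)
    return 0.5 * (1.0 + f / 1250.0)

def gain(harmonic):
    """|Vout/Vin| of one π section with the far end open-circuited."""
    f = 50.0 * harmonic
    w = 2.0 * math.pi * f
    Z = complex(series_R(f), w * L)     # series branch impedance
    Y = complex(0.0, w * C)             # total shunt admittance
    A = 1.0 + Z * Y / 2.0               # ABCD 'A' term; open-ended Vout/Vin = 1/A
    return 1.0 / abs(A)

gains = {n: gain(n) for n in (1, 3, 5, 7, 25)}
```

The fundamental passes through almost unchanged while the 25th harmonic is magnified by more than an order of magnitude, mirroring the behaviour seen in figures 5 and 6.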

__OTHER BENEFITS OF CIRCUIT SIMULATION__

Before leaving the subject of circuit modelling it is worth
considering one or two spin-offs from circuit simulation. The ability to run
simulations time and again can provide important statistical information with
regard to performance spreads due to component tolerance, temperature and
noise. All of these factors are amenable to statistical analysis and a
simulation can be the ideal tool for an engineer to explore the consequences of
these factors on his design.

Two related approaches can be adopted to produce spread
effects on a simulation. The first is worst case analysis where circuit
parameters at the extremes of their range can be examined to determine the
effect on operation. This is a conventional application of modelling, where
certain components or parameters are examined with a view to finding an optimum
value. If, however, the parameter variations are statistical in nature then a
Monte-Carlo analysis is more applicable to establish the spread of the design's
output. A normal or cumulative graph of computed performance can be displayed
against the specification to evaluate the distribution of each circuit sample
in the Monte-Carlo. Hence, an overall yield for a circuit can be ascertained,
as well as yield confidence limits.
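A minimal Monte-Carlo sketch of this kind of yield analysis might look as follows, taking an RC cut-off frequency as the performance measure and treating the component tolerance as one standard deviation (all values are illustrative assumptions):

```python
import math
import random

random.seed(1)

R_NOM, C_NOM = 1.0e3, 1.0e-7        # nominal 1 kΩ, 100 nF (assumed)
TOL = 0.05                          # ±5% treated as one std-dev (an assumption)
F_NOM = 1.0 / (2 * math.pi * R_NOM * C_NOM)

def sample_cutoff():
    # draw toleranced component values and compute the resulting cut-off
    R = random.gauss(R_NOM, TOL * R_NOM)
    C = random.gauss(C_NOM, TOL * C_NOM)
    return 1.0 / (2 * math.pi * R * C)

runs = [sample_cutoff() for _ in range(10_000)]

# Yield against a ±10% cut-off specification
in_spec = [f for f in runs if abs(f - F_NOM) / F_NOM <= 0.10]
yield_fraction = len(in_spec) / len(runs)
```

A histogram or cumulative plot of `runs` against the specification limits gives exactly the kind of spread display described above.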

In some instances it may be discovered that the
distributions are biased causing yield reduction. If a design modification can
be applied to take out the bias, the process can be repeated with the
expectation of increased yield. This process is referred to as design centring.
The method can be summarised as:

· Identify the circuit parameters to be designed
· Apply tolerance and other statistical bounds
· Perform a Monte-Carlo run and analyse the results
· Compute a design centring step (if required) and iterate

__INTEGRATION METHODS__

Later examples will consider a systems oriented approach to
simulating traction propulsion. However, this paper would be incomplete if mention
was not made of the variety of numerical methods used to integrate sets of
differential equations. These algorithms are fundamental to the solution of all
time domain simulators, whether circuit or control system biased. Most traction
equipment is a mixture of continuous and discrete models. Discrete sampled data
by their very nature can be easily computed as algebraic equations from one
step to the next. On the other hand continuous systems models, such as a motor
model, require approximate numerical solutions.

The integration problem is really a trajectory problem based
on a set of initial values. However, to represent this on a digital computer
the dependent and independent differential variables, dX and dt are written as
finite steps, ΔX and Δt. Hence, for a step size of Δt
the continuous differential equation has been transformed into an approximate
algebraic equation. As the step size gets smaller the approximation to the modelled
differential equation becomes better. Thus, the digital simulation is a model
of continuous differential equations, which are themselves models of the
physical system. The method just described is that due to EULER, which symbolically
may be written,

X_{k+1} = X_{k} + Δt.F(k)

where F(k) is the sampled value of the general input function F(X,t).

This process is equivalent to fitting a series of rectangles
under the system function curve. It should be noted the Taylor expansion for
sample k+1 gives,

X_{k+1} = X_{k} + Δt.F(k) + Δt^{2}F'(k)/2! + ...

and this provides a first approximation difference of Δt^{2}F'(k)/2.
This is referred to as the local
truncation error. Euler's method requires a short step length for reasonable
accuracy, but more importantly care must be exercised when using it because of
a tendency to numerical instability with higher order ill conditioned equation
sets. This instability manifests itself as the classic Nyquist form where the
output variables' magnitude grows exponentially while alternating in sign each
sample. However, it turns out that Euler is very good around a DSE because it
is not based on any prediction,^{[6]}
so combined with its simplicity it is still a powerful method.
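A short sketch of the Euler recurrence applied to a first-order lag, dx/dt = -x/τ (an RC discharge), shows the truncation error shrinking with step length. The parameter values are arbitrary:

```python
import math

def euler(tau, x0, dt, t_end):
    """Integrate dx/dt = -x/tau by the Euler recurrence X_{k+1} = X_k + Δt·F(k)."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x += dt * (-x / tau)    # rectangle under the function curve
        t += dt
    return x

tau, x0, t_end = 1.0e-3, 1.0, 3.0e-3
exact = x0 * math.exp(-t_end / tau)

err_coarse = abs(euler(tau, x0, 1.0e-5, t_end) - exact)
err_fine   = abs(euler(tau, x0, 1.0e-6, t_end) - exact)
```

The global error falls roughly in proportion to Δt, consistent with the Δt² local truncation error accumulated over Δt⁻¹ steps.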

RUNGE KUTTA methods are a development of Euler, combining
information from a number of steps. The purpose is to introduce higher order
terms as found in the Taylor expansion. Second order Runge Kutta (or improved
Euler) is a powerful way of extending simple Euler to better accuracy. In this
the Euler output for the next step, X^{*}_{k+1}, is used as an
estimate to provide a more accurate input function value:

X_{k+1} = X_{k} + Δt.{F^{*}(k+1) + F(k)}/2

The second order form can be extended to fourth by adding
further auxiliary equations. Runge Kutta 4, as it is known, is numerically robust
for non-stiff equation sets, although not very fast. It is a strong method for
solving systems with derivative discontinuities such as piecewise linear
functions.
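The improved Euler formula above can be coded directly, and its accuracy compared with simple Euler on the test system dx/dt = -x (a sketch; the step count and system are arbitrary):

```python
import math

def rk2_step(f, x, t, dt):
    """Second order Runge Kutta (improved Euler): average the two slopes."""
    f_k = f(x, t)
    x_star = x + dt * f_k           # Euler predictor X*_{k+1}
    f_star = f(x_star, t + dt)      # F*(k+1) evaluated at the prediction
    return x + dt * (f_star + f_k) / 2.0

def integrate(step, f, x0, dt, n):
    x, t = x0, 0.0
    for _ in range(n):
        x = step(f, x, t, dt)
        t += dt
    return x

f = lambda x, t: -x                 # test system dx/dt = -x, exact solution e^{-t}
euler_step = lambda f, x, t, dt: x + dt * f(x, t)

dt, n = 0.01, 100                   # integrate to t = 1
err_euler = abs(integrate(euler_step, f, 1.0, dt, n) - math.exp(-1.0))
err_rk2   = abs(integrate(rk2_step,  f, 1.0, dt, n) - math.exp(-1.0))
```

For the same step length the averaged-slope method is orders of magnitude more accurate, at the cost of one extra function evaluation per step.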

Other methods employ variable step length to improve the
speed of integration. Whether this is acceptable to a system simulation
depends on the hierarchical nature of the problem and the type of sample
rates in the digital part of the system. The variable step length KUTTA MERSON
is usually more accurate than Runge Kutta 4, although this may rarely be of
much value as Runge Kutta 4 generally provides more than enough accuracy.
Another variable step length method is the ADAMS BASHFORTH predictor
corrector. Unlike the previous techniques higher order derivatives of F(X,t)
are used in a polynomial fit. It is fast, but functions are required to be
differentiable several times to ensure reliable operation. This is not often
the case when modelling power electronic systems, but the method has been mentioned
for completeness. An idea of the relative trade-off between accuracy and speed
for a second order system is shown in fig. 7. It is worth noting that if the
step length is very short then all the algorithms seem to be converging to a
similar accuracy.^{[7]}
However, this is misleading because the degradation of higher order methods is
due to numerical quantisation effects at very high accuracy. For this reason,
when very short step lengths are required to find a DSE precisely, then Euler
integration can be as effective as other techniques.
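Variable step control can be illustrated with a far simpler scheme than Kutta Merson: comparing one full step against two half steps to estimate the local error, then growing or shrinking Δt accordingly. Plain Euler is used here only to keep the sketch short, and the acceptance and growth factors are arbitrary choices:

```python
def adaptive_euler(f, x0, t_end, dt0, tol):
    """Euler with step-doubling error estimate and simple step-length control."""
    x, t, dt, steps = x0, 0.0, dt0, 0
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        full = x + dt * f(x)                     # one full step
        half = x + 0.5 * dt * f(x)               # two half steps
        two_half = half + 0.5 * dt * f(half)
        err = abs(two_half - full)               # local error estimate
        if err <= tol:
            x, t, steps = two_half, t + dt, steps + 1
            if err < 0.25 * tol:
                dt *= 1.5                        # grow when comfortably inside tol
        else:
            dt *= 0.5                            # reject the step and retry shorter

    return x, steps

x_end, steps = adaptive_euler(lambda x: -x, 1.0, 1.0, 0.5, 1e-4)
```

The initial over-long step is repeatedly rejected and halved, after which the step length tracks the error tolerance as the solution decays.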

The choice of integration algorithm is therefore highly dependent
on the condition of the equations to be solved, and often in hierarchical systems
more than one method may be in use at any point in time. A problem is well
conditioned if changes in input data cause correspondingly small changes in
the solution. The converse is true for an ill-conditioned problem. The
amplification effect between input and output is sometimes referred to as the
condition number. In matrix terms, if we consider an output equation Y=CX, then
the condition number of C is ‖C‖.‖C^{-1}‖ for a suitable matrix norm. It is often possible to
observe an ill-conditioned matrix by looking for a high degree of columnar
linear dependency. Such observations are very important when first formulating
a model, not only in deciding an integration algorithm, but determining the
accuracy of initial conditions that represent the starting point of the
integration and its ability to converge. Both problem and integration algorithm
can be considered numerically stable if the end result obtained is the same (or
very similar) when started from perturbed positions. If the problem is
stochastic (rather than purely deterministic), as some advanced traction
propulsion simulations are, then the same end mean and variance should be expected.
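A small sketch for 2×2 matrices, using the infinity (maximum row sum) norm and illustrative matrices, contrasts a well conditioned matrix with one whose columns are nearly linearly dependent:

```python
def inf_norm(M):
    """Infinity (maximum row sum) norm of a 2x2 matrix."""
    return max(abs(M[0][0]) + abs(M[0][1]), abs(M[1][0]) + abs(M[1][1]))

def inverse_2x2(M):
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

def cond(M):
    """Norm-based condition number ‖M‖·‖M⁻¹‖."""
    return inf_norm(M) * inf_norm(inverse_2x2(M))

well = [[2.0, 0.0], [0.0, 1.0]]          # independent columns
ill  = [[1.0, 1.0], [1.0, 1.0001]]       # nearly dependent columns
```

The near-dependent columns push the condition number up by four orders of magnitude, which is exactly the visual cue described above.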

The basic rules of integration algorithm choice can be
summarised as follows:

· Produce hierarchy diagram of model to simulate
· Estimate (guess) the condition of each block of your hierarchy
· Consider the impact of DSEs on system condition and choose step length correspondingly
· Determine what accuracy is required
· Select algorithm that gives best accuracy/computation time trade-off, taking account of CPU capability and availability as necessary

__SYSTEM SIMULATION__

So far this paper has addressed predominantly time domain
simulation methods. If frequency information is required then the circuit can
be simulated over a range of ac input frequencies to obtain frequency responses
and phase plots. If it is desired to find the spectral content of a waveform
then a Fast Fourier Transform (FFT) may be applied. However, when dealing with
system simulations, a totally frequency domain approach is feasible. This would
normally be based on the concept of `steady state' solution, with signals modelled
as summations of one or more harmonic series. The harmonic voltages and
currents can then be injected into networks that can account for frequency
dependent effects readily. For a full discussion of these methods the reader is
referred to Arrillaga et al.^{[8]}
It is also possible to envisage a model from substation to vehicle speed and
distance, based on a steady state approach, which could be progressed through
relatively coarse time steps to determine operational effects.

For a traction propulsion system it is desirable to consider
the effect of the railway network's interface on the behaviour of traction
control. This means it is necessary to look beyond the power electronic system
at the centre of the propulsion equipment, to see how it is affected by the
power transmission system feeding it and the rails on which it runs. Having
discussed an example of power supply modelling earlier in the paper, the
wheel-rail interface is considered here.

The process of establishing a model for a wheel-rail
adhesion characteristic is rather different from that of electrical power
systems. A characteristic is evolved by data assimilation from as many sources
as possible,^{[9],[10],[11]}
to reduce the risk of biased data due to erroneous measurement. Examples of the
type of characteristics that result are given in fig. 8. Clearly the curves are
not linear, and it turns out that a combined polynomial and piecewise linear
approach is most effective. The initial steep positive slope is quite linear
and a straight-line representation is realistic. The rest of the characteristic
is represented by a polynomial fit of 4^{th} to 8^{th} order
depending on the rail condition type. If an attempt is made to match the
gradient at the origin to the starting gradient of the polynomial an almost
seamless transition is effected.
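A hedged sketch of such a combined characteristic follows, with the gradient matched at the transition so the join is seamless. The coefficients are invented for illustration and are not the measured curves of fig. 8:

```python
GAMMA_0 = 0.01      # creep at the end of the initial linear region (assumed)
M = 10.0            # initial slope dμ/dγ (assumed)
A2 = -150.0         # curvature term shaping the peak (assumed)

def adhesion(creep):
    """Adhesion coefficient μ as a function of per-unit creep γ ≥ 0."""
    if creep <= GAMMA_0:
        return M * creep                       # linear initial region
    d = creep - GAMMA_0
    # polynomial continued from (GAMMA_0, M·GAMMA_0) with matching slope M,
    # so value and gradient are both continuous at the transition
    return M * GAMMA_0 + M * d + A2 * d * d

# the peak sits where the polynomial gradient vanishes, at d = -M/(2·A2)
peak_creep = GAMMA_0 - M / (2.0 * A2)
```

A higher-order polynomial (4th to 8th, as the text describes) follows the same pattern, with the extra coefficients fitted to the assimilated data for each rail condition.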

This work was conducted on Brush Traction's Class 60 freight
locomotive project for two major reasons. The first was the requirement to establish
with confidence an axle diameter that was adequate and not over designed, but
could withstand torsional oscillations expected when the peak of the adhesion
curve was reached. The second and more difficult task was to incorporate the
model developed in the course of the first task into a system model that included
a speed measuring radar, main alternator supply and control, and traction
motors with separate field controls. The purpose of doing this was to develop a
control method of keeping the wheels creeping at the peak of available adhesion
when demanded tractive effort could not be achieved due to the rail conditions,
loading or gradient.

The first task was to model the transmission, wheel and axle
assembly. Initially a Continuous System Modelling Program was used to build a
spring-mass-damper model of the transmission and wheelset. The
spring-mass-damper is the elementary building block of a traction mechanical
drive and can be easily represented in a block diagram by using Laplace
operator notation (see fig. 9). J is the mass, or inertia in this
instance, as rotational quantities are being considered. K is the spring or
shaft stiffness, and D is the damper rate. In many cases a physical damper
device is not present, but this term should still be included to model the
inherent damping of the bush or shaft, although in the case of a shaft D is a
very small term. F_{v} is the viscous friction term from the bearings.
If the output is driving an infinite inertia then the load positional feedback
will be constant, but normally this would be derived from the next stage.
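With the load side held fixed, the block of fig. 9 reduces to the characteristic equation J·s² + (D + F_v)·s + K = 0, whose roots are the torsional poles. A sketch with illustrative parameter values (not those of any particular drive):

```python
import cmath
import math

J  = 50.0       # polar inertia, kg·m² (assumed)
K  = 2.0e6      # shaft stiffness, N·m/rad (assumed)
D  = 150.0      # damper rate, N·m·s/rad; small, as for inherent shaft damping
FV = 10.0       # viscous friction term from the bearings (assumed)

# roots of J·s² + (D + FV)·s + K = 0 via the quadratic formula
a, b, c = J, D + FV, K
s1 = (-b + cmath.sqrt(b*b - 4*a*c)) / (2*a)
s2 = (-b - cmath.sqrt(b*b - 4*a*c)) / (2*a)

f_osc = abs(s1.imag) / (2.0 * math.pi)   # damped torsional frequency, Hz
```

The lightly damped complex pair is exactly the kind of root sought later from det[sI-A]=0 for the full multi-inertia wheelset model.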

More complex mechanical models can be derived from
combinations of spring-mass-damper blocks that bolt together easily even in a
high-level language environment. The reaction torques or forces appear as
feedback paths in a block diagram. In this way it is possible to model not only
the transmission drive components, but also wheelset and bogie, longitudinal
and vertical modes, track forces and disturbance inputs from the track such as
rail joints. When this more complex mechanics model is fitted into a system
simulation it becomes possible to evaluate the effect of wheel-rail forces at
the motor, in the power conversion equipment, and even back at the supply.

Reverting to the problem of torsional oscillations, these are present to some degree in any powered wheelset once peak available adhesion has been reached. Where the wheel/rail interface characteristic slope is positive then extra damping proportional to the slope is introduced to the wheelset angular motion. When this damping term is not present, or if it is only present in a small measure on one of the two wheels, then axle torsional oscillations may propagate. A simplified dynamic model of a wheelset is shown in fig. 10. It consists of two polar inertias, joined by a torsional spring representing the axle. A third inertia is then joined to the system by means of a transmission gearing and another torsional spring. This represents the effect of the motor armature inertia, motor shaft and gearbox. The input torque applied to the motor shaft is the electromagnetic torque from the motor. The system has two unstable torsional modes:

· A single node on the axle shaft section, with the outer inertias (armature and left wheel) oscillating in anti-phase.
· Two nodes, one on each shaft section, with the outer inertias oscillating in phase.

The simulation shows the build up of torsional oscillations
with both frequencies clearly visible on PU creep, the lower graph of fig. 11.
With this relatively simple system it was possible to find the characteristic
equation from det[sI-A]=0, and then find its zeros. Matrix A is the system or
plant matrix, and [sI-A]^{-1} is known as the frequency domain form of
the transition matrix. The roots of s in the characteristic equation are
the system poles, or eigenvalues of matrix A. The related vectors obtained by
substituting the roots into [sI-A]x = [0] are known as the eigenvectors. Two pairs
of positive complex conjugate roots were found, where the modulus of the
imaginary component yields the frequencies of oscillation as 53 and 108Hz.
Thus, the time domain simulation frequencies were confirmed. The results at
30% full speed and 5% creepage show build up over a 1 second period, whereupon
the amplitude is limited because the oscillation is trying to extend to the
positive slope region, a well damped section of the adhesion characteristic.
This phenomenon provides enough positive damping to balance the negative
damping which triggers the oscillations, and hence gives rise to a bounded
instability, of 10% pu creep peak to peak in this case. It would be exceedingly
difficult to predict this theoretically because the limiting amplitude is
highly dependent on the non-linearity of the wheel/rail interface curve, but
simulation provides the answer once again. The maximum principal stress level
is arrived at by considering the worst case bending stresses in the axle due to
loading (this produces the dominant 4Hz component in the top graph of fig. 11),
and combining this with the shear stress component due to the torsional oscillation.
It can be observed from the graph that the torsionals only increase the peak
stress level by 20% in this instance, but this is highly dependent on the
dimensions of the axle as well as loading conditions.

For the second part of the class 60 modelling exercise Cecube was commissioned
to build a full system model of the locomotive with the purpose of designing an
effective peak tracking adhesion control scheme when the locomotive was creeping.
The system comprised six parallel DC armatures with individual field
control, a diesel engine driving a rectified alternator supply, the axle hung
drive system discussed above with axle speed probes, and a Doppler radar to
measure loco speed independently. A schematic diagram of the control system is
given in fig. 12. Since the system comprises many quite complex
subsystems a hierarchical approach was adopted to generating the simulation.
This involved writing a code block for each component of the system, debugging
and testing it in isolation before attempting to assemble it into the full
system simulation. The process of combining several components into a system
model gives rise to two immediate problems:

· What method of connection or interfacing is appropriate?
· How to handle the convergence difficulties that may result in a high order system?

The simplest interface is via a global or common variable
that is updated with a short step length. If the whole of the simulation can be
run with a short step length, this presents no problems. Alternatively, it may
be possible to separate out the interface variables from the state variables
that derive them, using inter-sample interpolation, which results in a
multi-rate sampling simulation. With sampled data it is straightforward to
output a variable to another component of the simulation, without making any
assumptions about the piecewise linear nature of the data. This is particularly
applicable when modelling control algorithms that include relatively slow real
time sample periods. When inputting data to a sampled data module, such as a
current or voltage measurement, a little more care is required to ensure that
the sampled data measurements have the correct age. In many control systems a
data measurement may be used more than once during a sampling period, so the
data will be older on the later occasions. A simulation must endeavour to use
data sampled at the correct point in time, and not simply the most recently
computed value, which would not be available in real time.
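The sampling-age discipline can be sketched as follows: a continuous first-order plant stepped at a fine Δt, with a proportional controller running every N steps on the measurement latched at its own sample instant, never on the most recent integrator value. All plant and controller parameters are illustrative:

```python
DT = 1.0e-4                 # continuous-model integration step
N = 50                      # controller period = N·DT (5 ms here, assumed)
TAU, SETPOINT, GAIN = 0.02, 1.0, 5.0

x = 0.0                     # plant state
u = 0.0                     # zero-order-held controller output
latched = (0.0, 0.0)        # (sample_time, measured_x) as seen by the controller

for k in range(5000):       # simulate 0.5 s
    t = k * DT
    if k % N == 0:
        latched = (t, x)                     # sample-and-hold with timestamp
        u = GAIN * (SETPOINT - latched[1])   # control law uses the aged sample
    x += DT * ((u - x) / TAU)                # plant: τ·dx/dt = u − x

steady_state_error = SETPOINT - x
```

Keeping the timestamp with each sample makes it easy to reuse a measurement later in the period at its correct age, as a real-time controller would have to.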

Another means of subsystem connectivity is the method of
tear, also referred to as diakoptics,^{[12]}
which is applicable to electrical network problems. Here each component
subsystem drives a voltage source representing the loading network. The loading
network is fed from a separate voltage source representing the driving network.
The two subsystems are connected together by an additional block referred to as
the link network. This comprises a current source driving an inductance (see
fig. 13). The current source represents the difference between the output
current of the driving network and the input current of the loading network. By
calculating the mesh equations of the torn networks and the connectivity
equation of the link network, a general solution is found for the actual
current flowing between the torn networks. This is achieved by taking the
limit of the link network, so the current source tends to zero and the
inductance tends to infinity. Because these values do not have to be
explicitly computed in the simulation this does not present a problem. Having obtained
a value for the actual current flowing between the networks this can be used as
output/input data to the respective subsystem networks, which continue to be computed
separately.

Although a system simulation is usually of very high total
order (for instance the example in fig. 12 is over one hundred), if properly
broken down into hierarchical subsystems, each component should converge
readily in its own right. By restricting the connectivity between layers of
a hierarchy at the start of the simulation run, it is possible to converge the
lowest level locally before interfacing it to the layer above. This
progressive approach is a most effective way of achieving convergence of system
simulations. Normally, it produces convergence over a relatively short period
of simulated time, typically less than half a second. On the rare occasions
when convergence is very slow or represents a substantial proportion of the
overall run time it may be possible to improve matters by relaxing the
integration accuracy or by increasing the step length. The ability to achieve
this will depend on the conditioning of the equations, which will vary from
case to case.
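A sketch of this progressive strategy, using two illustrative first-order lags in place of real subsystems (all coefficients are assumptions), might look like:

```python
# Sketch of progressive convergence: the lowest-level subsystem is
# iterated to a local steady state with its interface frozen, and only
# then coupled to the layer above. The dynamics are illustrative
# first-order lags, not a railway model.

def step(x, u, a=-5.0, dt=0.01):
    return x + dt * (a * x + u)       # Euler step of x' = a.x + u

# Phase 1: converge the low-level subsystem with its interface held.
x_low, u_frozen = 0.0, 1.0
for _ in range(2000):
    x_low = step(x_low, u_frozen)
assert abs(x_low - 0.2) < 1e-3        # local steady state, u/|a|

# Phase 2: connect it to the layer above (x_high driven by x_low).
x_high = 0.0
for _ in range(2000):
    x_high = step(x_high, x_low)
    x_low = step(x_low, u_frozen)
assert abs(x_high - 0.04) < 1e-3      # whole hierarchy now converged
```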

The creep control simulation included geometric functions to
account for the movement of the radar, both longitudinal and pitching, due to
motion of the vehicle body relative to the axles. To produce a realistic simulation
of adhesion it was found necessary to build a stochastic model of the
wheel/rail interface as a function of loco speed. This again was performed by
assimilating data, this time from Class 60 test measurements on low adhesion
track conditions. As a result the simulation was made both accurate and
representative of typical conditions. Using this base a novel creep control
algorithm was developed to maintain operation at or very close to the adhesion
curve peak when required, even under dynamic conditions of changing adhesion
levels.
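Neither the Class 60 data nor the novel control algorithm can be reproduced here, but the flavour of the approach can be sketched: a stochastic adhesion curve whose peak falls with speed, and a crude hill-climbing creep controller standing in for the real one. The curve shape, noise level and controller logic are all assumptions:

```python
import math
import random

# Illustrative only: the Class 60 measurements and the actual creep
# control algorithm are not reproduced. Curve shape, noise level and
# hill-climbing logic are assumptions for the sketch.

def adhesion(creep, speed, rng):
    """Single-peaked adhesion curve plus measurement noise."""
    peak_creep = 0.10
    mu_peak = 0.25 / (1.0 + 0.02 * speed)   # peak adhesion falls with speed
    mu = mu_peak * (creep / peak_creep) * math.exp(1.0 - creep / peak_creep)
    return mu + rng.gauss(0.0, 0.002)

def seek_peak(speed, rng, steps=60, d=0.005):
    """Crude hill-climbing creep setpoint controller (an assumption)."""
    creep = 0.02
    def mu_avg(c):   # average many noisy samples before comparing
        return sum(adhesion(c, speed, rng) for _ in range(200)) / 200
    for _ in range(steps):
        creep += d if mu_avg(creep + d) > mu_avg(creep) else -d
        creep = max(0.01, creep)
    return creep

rng = random.Random(1)
creep = seek_peak(speed=20.0, rng=rng)
assert abs(creep - 0.10) < 0.03      # settles near the adhesion peak
```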

__SYSTEM EQUATIONS__

To model any subsystem of a simulation it is first necessary
to write down equations that describe the subsystem. These can be categorised
as follows:

I. Sets of related differential equations

II. Algebraic or transcendental equations

III. Polynomial functions

IV. System constraints and limits

Conventionally state space methods are employed to describe
the differential elements. In state space the system is characterised by a set
of first order differential equations made up of `state' variables. System
analysis and design is accomplished by solving a set of first order equations,
rather than a single higher order equation. Consequently, the states need not
be accessible, observable, or even have a physical realisation. This approach
often simplifies a problem, particularly if the system is linear, and has
several advantages when using a digital computer for solution.
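A minimal sketch of why sets of first order equations suit digital solution: whatever the system order, every state is advanced by the same generic update rule. The coefficients below are assumptions for illustration:

```python
# Sketch: a set of first order state equations advanced by one generic
# update rule (simple Euler). The coefficients are assumptions.

def euler_step(x, derivs, u, dt):
    dx = derivs(x, u)
    return [xi + dt * dxi for xi, dxi in zip(x, dx)]

# x1' = -2.x1 + x2,  x2' = -3.x2 + u
def derivs(x, u):
    return [-2.0 * x[0] + x[1], -3.0 * x[1] + u]

x, dt = [0.0, 0.0], 0.001
for _ in range(10000):                  # 10 s of simulated time
    x = euler_step(x, derivs, 1.0, dt)

# Steady state: x2 = u/3, then x1 = x2/2
assert abs(x[1] - 1.0 / 3.0) < 1e-3
assert abs(x[0] - 1.0 / 6.0) < 1e-3
```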

In the case of linear systems the equations can be expressed
in matrix form,

X´ = AX + BU

Y = CX + DU

where U is the input vector, B and D are input or driving matrices, A is the system matrix discussed earlier, and C is the output matrix relating the states to the output vector of interest. Consider the example of a pair of second order systems represented in analogue form by state variable diagrams in fig. 14. In both examples the D matrix is null, as is often the case in practice. In the top example the state equations can be written down by inspection,

x´_{1} = a_{1}.x_{1} + x_{2} ...1

x´_{2} = a_{2}.x_{2} + u ...2

From this, the state space form becomes,

X´ = [a_{1} 1; 0 a_{2}]X + [0; 1]u

By substituting for x_{2} in
equation 2 from equation 1 the system equations are derived.

Hence,

x´´_{1} - (a_{1}+a_{2}).x´_{1} + a_{1}a_{2}.x_{1} = u

and after
substituting for the output variable, y, this gives the characteristic equation

y´´ - (a_{1}+a_{2}).y´ + a_{1}a_{2}.y = 0

Now consider the bottom second order system in fig. 14. The state equations are independent, and can again be written down by inspection,

x´_{1} = s_{1}.x_{1} + u ...3

x´_{2} = s_{2}.x_{2} + u ...4

The state space form is,

X´ = [s_{1} 0; 0 s_{2}]X + [1; 1]u

The output matrix coefficients are,

c_{1}
= -c_{2} = 1/(s_{1}-s_{2})

by definition for this particular system.

However, the solution to the two independent state equations, 3 and 4, by Laplace Transform is trivial, and can be written as,

(s - s_{1}).x_{1}
= (s - s_{2}).x_{2} = u

Hence, y = {1/(s_{1}-s_{2})}.{u/(s-s_{1}) - u/(s-s_{2})}

= u / {s^{2}-(s_{1}+s_{2})s+s_{1}s_{2}}

By applying an inverse Laplace Transform the time domain
solution is determined as,

y´´ - (s_{1}+s_{2}).y´ + s_{1}s_{2}.y = 0

Notice by choosing s_{1}=a_{1} and s_{2}=a_{2},
the systems are identical.
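This equivalence is easy to check numerically. The sketch below integrates both fig. 14 forms with assumed values s_{1} = a_{1} = -1 and s_{2} = a_{2} = -2 and a unit step input, and compares the outputs:

```python
# Numerical check of the text's result: with s1 = a1 and s2 = a2 the
# coupled form (top of fig. 14) and the canonical form (bottom) give
# the same output y. The values of a1, a2 and the input are assumptions.

a1, a2, u, dt = -1.0, -2.0, 1.0, 1e-4
x = [0.0, 0.0]        # coupled:   x1' = a1.x1 + x2,  x2' = a2.x2 + u
z = [0.0, 0.0]        # canonical: z1' = a1.z1 + u,   z2' = a2.z2 + u
c = 1.0 / (a1 - a2)   # output coefficients c1 = -c2 = 1/(s1 - s2)

for _ in range(100000):   # 10 s of simulated time, simple Euler
    x = [x[0] + dt * (a1 * x[0] + x[1]), x[1] + dt * (a2 * x[1] + u)]
    z = [z[0] + dt * (a1 * z[0] + u), z[1] + dt * (a2 * z[1] + u)]

y_coupled = x[0]
y_canonical = c * (z[0] - z[1])
assert abs(y_coupled - y_canonical) < 1e-6      # identical responses
assert abs(y_coupled - u / (a1 * a2)) < 1e-3    # steady state u/(s1.s2)
```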

The separation, or independence, of the state variables in the
latter case is a useful technique because it creates a diagonal system matrix.
This diagonal variation is known as the canonical form. As has been seen it
yields the system roots or modes very easily. Although with these simple second
order systems the canonical form offers little advantage, when higher order
systems are considered the canonical form can be easier and quicker to
integrate. To effect a canonical transform on a set of system equations a
generalised approach is necessary to create a new set of state variables. The
derivation and use of this transform method can be found in several reference
texts. ^{[13]}^{,[14]}
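For a 2x2 system the transform can be worked by hand. The sketch below diagonalises the system matrix of the first fig. 14 example, A = [a_{1} 1; 0 a_{2}], using its modal matrix of eigenvectors; the numerical values are assumptions, and the general method is as per the reference texts:

```python
# Sketch of a canonical (diagonalising) transform: the modal matrix T
# of eigenvectors gives inv(T).A.T = diag(a1, a2). Values are assumed.

a1, a2 = -1.0, -2.0
A = [[a1, 1.0], [0.0, a2]]

# Eigenvectors, found by hand: (1, 0) for a1 and (1, a2 - a1) for a2.
T = [[1.0, 1.0], [0.0, a2 - a1]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det, P[0][0] / det]]

Lam = matmul(inv2(T), matmul(A, T))   # should be diag(a1, a2)
assert abs(Lam[0][0] - a1) < 1e-12 and abs(Lam[1][1] - a2) < 1e-12
assert abs(Lam[0][1]) < 1e-12 and abs(Lam[1][0]) < 1e-12
```

In the transformed (canonical) variables the states decouple, which is exactly the property exploited by the second fig. 14 system.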

__FURTHER EXAMPLES OF SYSTEM SIMULATIONS__

From the preceding section it is apparent that tools for handling
and manipulating matrices could be beneficial to simulation design. One possibility
is that the designer might develop bespoke high-level language tools, in C or
FORTRAN, specifically for an application. This is the approach to system
simulation considered thus far. The disadvantage is that the designer requires
a fair degree of programming skill, and the importance of validation is
increased because the simulation is subject to the possibility of both
modelling and programming mistakes. Furthermore, packages or suites of software
cannot be developed rapidly from scratch.

An alternative approach is to design simulations in a
general purpose computer environment that operates at a level higher than a
programming language, and is able to perform the required matrix operations
explicitly. MATLAB (MATrix LABoratory) is such a tool. It was originally
written in FORTRAN for mainframe computers in the late 1970's, and makes use of
a kernel of LINPACK^{[15]}
and EISPACK,^{[16]}
developed at Argonne National Laboratory. MATLAB is an interactive system whose
main data type is an automatically dimensioned matrix. Problem solutions are
expressed in a manner similar to written mathematics. In academia it is a
standard educational and research tool, and has gained ground throughout industry.
MATLAB is particularly strong in the areas of control theory and digital
signal processing, although toolbox expansions allow it to be applied to
various disciplines. Now written entirely in C, versions of MATLAB exist for
all commonplace computer systems, including IBM PCs, Macintosh and SUN
workstations, and include non-linear analysis, programmable macros, full
IEEE floating point arithmetic, and sophisticated post-processing graphics for
data examination.

The best way to gain an impression of its capabilities is
to view the range of commands available in MATLAB. Other fully integrated
environments have become available over time. Most importantly these all have
a graphical user interface allowing the system being simulated to be drawn on
a computer screen schematic, making visualisation and editing of subsystem
interconnections much easier. Such environments operate either by
interpreting the source or, when execution time is long, in a compiled mode. Other
features provide for hierarchical modelling with user code blocks, the
generation of algorithm code for real-time applications, and the production
of model paradigms from knowledge bases. What the matrix based packages
are less good at is circuit simulation, because they have evolved from a
systems based mentality, rather than network theory.

An example of this type of graphical user interface and
block modelling is shown in fig. 15. This particular model is that of a DC
motor, which could be part of a system hierarchy model. It has armature voltage
input at label 3 (left side block), and two other inputs of field voltage (labelled
1 on right side) and speed (labelled 2). The single output in this case is the
motor electromagnetic torque. The model includes armature and field absolute
current limits, flux multiplication blocks, and algebraic or polynomial functions
to model armature reaction, flux hysteresis and field saturation. By making
use of non-linear components a motor model suitable for control behaviour
analysis under any operational conditions has been formed.
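The fig. 15 model itself is not reproduced here, but the kind of non-linear blocks described (a saturating flux curve, an absolute current limit and a flux multiplication forming the torque) can be sketched. All curve shapes and constants below are illustrative assumptions:

```python
import math

# Sketch of the non-linear block types described in the text: field
# saturation, an absolute current limit, and flux multiplication to
# form electromagnetic torque. Constants are assumptions, not fig. 15.

def limit(x, x_max):
    """Absolute (symmetric) limit block."""
    return max(-x_max, min(x_max, x))

def field_flux(i_field):
    """Saturating flux curve (illustrative tanh shape)."""
    return 1.2 * math.tanh(i_field / 50.0)

def motor_torque(v_armature, r_armature, back_emf, i_field):
    i_arm = limit((v_armature - back_emf) / r_armature, 1500.0)
    k = 2.0                      # torque constant, an assumption
    return k * field_flux(i_field) * i_arm

# Torque rises with armature current only until the limit is reached:
t1 = motor_torque(400.0, 0.1, 300.0, 40.0)   # 1000 A, below the limit
t2 = motor_torque(900.0, 0.1, 300.0, 40.0)   # clipped to 1500 A
assert abs(t2 / t1 - 1.5) < 1e-9             # not the unlimited ratio of 6
assert field_flux(500.0) < 1.2               # flux saturates
```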

This motor model was used in a chopper fed system simulation
to predict harmonic current level in the supply. The requirement was to keep
current in the frequency range 360 Hz to 430 Hz below 50 mA. The blending of
regenerative and rheostatic brake was considered a potential hazard as this
could introduce frequencies that were not just integer multiples of the
chopping frequency, and some of these may fall in the signalling bands. By
carefully controlling the source impedance the model was operated in a mode
where the drive would repeatedly switch between regenerative and rheostatic.
One solution might have been to force the drive to remain in a mode for a
minimum period, say 0.5 s, so that the resulting 1 Hz cycling would not cause interference.
However, this would give less accurate control, might even affect passenger
comfort, and, most importantly, could result in electric brake being switched
off altogether: rheostatic brake might be required during low receptivity but
be unavailable because of the mode minimum time. A far better solution is chopper
cycle-by-cycle blending, provided it can be shown not to cause interference.

The simulation was able to demonstrate that under all feasible conditions of receptivity the signalling levels were not exceeded. Some of these conditions could not be easily arranged on a railway, so measurement in all cases was not possible. The simulation effectively provided answers for these instances. Needless to say validation of the model to the satisfaction of the railway operator was imperative for this purpose. This was achieved by comparison of measurement and simulation in as many modes of operation as practical over a considerable period of time.
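The kind of band-level check the simulation supported can be sketched as follows, with a synthetic chopper ripple standing in for the validated model output, an assumed 300 Hz chopping frequency, and a single-frequency DFT correlation as the measuring tool:

```python
import math

# Sketch of a signalling-band check: compute the supply-current
# spectrum and verify the 360-430 Hz band stays below 50 mA. The
# waveform is synthetic (assumed 300 Hz chopper ripple on a 40 A
# mean), not the validated model output.

fs, n = 10000, 10000                 # 1 s of samples at 10 kHz
chop = 300.0                         # assumed chopping frequency, Hz
i = [40.0 + 0.030 * math.sin(2 * math.pi * chop * k / fs)
          + 0.020 * math.sin(2 * math.pi * 2 * chop * k / fs)
     for k in range(n)]

def amplitude(signal, f, fs):
    """Single-frequency DFT amplitude by direct correlation."""
    m = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * k / fs)
             for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * k / fs)
             for k, s in enumerate(signal))
    return 2.0 * math.sqrt(re * re + im * im) / m

worst = max(amplitude(i, f, fs) for f in range(360, 431, 10))
assert worst < 0.050                  # below the 50 mA requirement
assert amplitude(i, 600, fs) > 0.015  # 2nd chopper harmonic is visible
```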

__GETTING
STARTED__

Not surprisingly the sophisticated integrated simulation
environments discussed in the last section are relatively expensive. Assuming
such tools are not available and you do not have the programming experience,
the time, or perhaps the software to write your own simulations, there is a way
of modelling some problems. It just requires a modest specification PC and a
standard spreadsheet package, as used prolifically for data recording,
business graphics and accounting.

A spreadsheet can perform arithmetic and simple functional
operations on rows and columns, or on individual elements. Thus it provides a
convenient way with relatively easy syntax to perform the type of operations
fundamental to all methods of simulation discussed in this paper. The ease of
the graphical user interface and the high quality graphical output make
spreadsheets an ideal introduction for the novice. A number of authors have
previously written papers on this subject^{[17]}^{,[18]}
and the Open University^{[19]}
successfully introduced spreadsheets as a means of teaching both subject matter
and modelling simultaneously.
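The essence of a spreadsheet simulation is a single cell formula copied down a column, each row advancing the solution by one step. A sketch of the equivalent recurrence, for an illustrative first-order lag (column layout and constants are assumptions):

```python
# Sketch of what a spreadsheet simulation does: each row applies the
# same formula to the row above, here Euler's rule for a first-order
# lag x' = (u - x)/tau. Columns and constants are illustrative.

tau, dt, u = 1.0, 0.05, 1.0
rows = [(0.0, 0.0)]                   # columns: time, x
for _ in range(200):                  # each new row = one copied formula
    t, x = rows[-1]
    rows.append((t + dt, x + dt * (u - x) / tau))

t_final, x_final = rows[-1]
assert abs(t_final - 10.0) < 1e-9
assert abs(x_final - 1.0) < 1e-3      # settled to the input value
```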

Finally, activity continues to abound in the academic
environment of simulation theory and language. One area of research that has influenced
railway systems simulation development in the last few years is behavioural
modelling and object oriented structures. Research into and applications of the
Unified Modeling Language^{[20]}
(UML) offer a cross-discipline specification for modelling processes. The
emphasis is on declarative rather than FORTRAN like procedural forms. The
effect should be to create reusable software equally suited to various
engineering tasks by merely defining an instance of a particular object.
Benefits should include the speeding up of both simulation development and execution.

__BIBLIOGRAPHY__

^{4} A
General Approach to Sampled Data Modelling for Power Converter Systems, G.C. Verghese,
M.E. Elbuluk and J.G. Kassakian, IEEE Trans PE-1, 1986.

^{5} A
Computer Program to Simulate Semiconductor Circuits, L Nagel, Memorandum No.
M520, University of California, 1975.

^{6} Simulation
of State-Events in Power Electronic Devices, P.P.J. van den Bosch and H.R.
Visser, IEE, Conf. Pub. 324, 1990.

^{8} Power
System Harmonics, J. Arrillaga, D.A. Bradley and P.S. Bodger, John Wiley and
Sons, London, 1985.

^{9} Adhesion/Slip
Characteristics in Traction, D.S. Armstrong, BR Research Technical Memorandum
TM TBC015, Dec 1988.

^{10} Locomotive
Friction - Creep Studies, C.F. Logston Jr and G.S. Itami, ASME Paper No.
80-RT-1, 1980.

^{11} Dutch
Railways Three Phase Electric Traction Test Vehicle - Adhesion Measurements, Elektrische
Bahnen, 329-338, 1979.

^{16} Matrix
Eigensystem Routines - EISPACK Guide, B.T. Smith, J.M. Boyle, J.J. Dongarra, B.S.
Garbow, Y. Ikebe, V.C. Klema and C.B. Moler, Springer-Verlag, 1976.