# Math Modelling

Added: (Mon Aug 20 2001)

Pressbox (Press Release) - Mathematical Modelling 1980-2001

For details of my current projects, please go to http://www.inverse-problems.com/projects. The graphics referred to below can also be found there.

## Introduction

I have written this history of my mathematical modelling experience partly in response to academic researchers interested in the techniques used and the assumptions made, but also for potential employers wanting more background on what I have actually done.

Firstly, let me make it clear that I have not spent all of the last 21 years doing only mathematical modelling. I am a professional computer programmer and have done many other tasks, including database work, networking, code conversion, system administration, typesetting, and sometimes just plain ordinary data processing and even keying in. But out of all of these, mathematical modelling is the one area where I am most usefully employed, and from which I get the most satisfaction.

I couldn't possibly detail here what I've done every single day, week, or even month of my career, but in mathematical modelling terms, here is a brief history:

- 1980-1984: Almost all of this time was mathematical modelling, with a little user support and graphics development (NNC, Marconi, Plessey). Some of the programs used here I developed from scratch.
- 1985-1989: Involvement in mathematical modelling, but mostly in a debugging role (contracts for Shell, BP, etc.).
- 1990-1994: Only a little mathematical modelling, this time for 'Digital' applications; most of my time was spent on database and graphics development.
- 1995-2001: No mathematical modelling done professionally, but in 1998 I started a part-time D.Phil in Inverse Problems. This led to my current mathematical modelling project - scanners. Go to http://www.inverse-problems.com/projects for more on that. Professionally, my programming work has been with databases, system administration (Unix), networking (Lanman), encryption management and, most recently, secure mailing.

Now let me go into the general principles of mathematical modelling that I've learnt from these years, and then I'll go into the above history in more detail. The gif files accompanying this document will illustrate this text, and will also, I hope, prove to potential employers that I really "was there, and did that"!

## General Principles of Mathematical Modelling

1) Understand the problem. It is always helpful to write a small program yourself that simulates it, even if there are plenty of existing codes. This can give valuable insight into the modelling process, and also into what sort of computing problems you can expect. This is the stage where 'personal' attributes are needed! You might have to talk to scientists, engineers, economists - all sorts - and you'll have to talk in their language. From all this, you can begin to construct the mathematical model, or understand the existing one. If you are constructing a new model, don't be surprised if you can't find any mathematical representation of the problem you're looking at. The thing to remember is that textbooks and papers tend to stick to simple, special cases, because it is easier to get the theory across. On the other hand, you might find a model with some similarities to yours, but just different enough to be unusable. What's happened here is that someone has had the same problem as you! He or she has then developed a model for their special case, but hasn't generalised it, due to lack of time. N.B. a graphics display is well worth the effort of setting up at this stage; it can give valuable insight, even if the display does not form part of the final report.

Try to use all data in as 'pure' a form as possible. If you are working on the theory, away from the physical measurements, this can be so easy to forget! As a crude example, let's say a voltmeter (old-fashioned, not a CRO!) actually measures amps, then converts to volts. At some point in your theory, you decide the algorithm can work in amps, so you convert the volts back to amps. However, it may be that amps are available directly from the sensor, via another socket, so you would be best using that directly, rather than having two unnecessary calculations: one made by the sensor, one by your algorithm. Unnecessary calculations can introduce more errors, of course. It may be that you are in the lucky position of dictating what the sensors are, and even how they are made, but if not, this is a point well worth bearing in mind.

2) Establish whether programs already exist. This is where you have to be a professional. You might be longing to write some wonderful code that will do the job, but if there are programs there already, then you HAVE to look at them! If programs do exist, there is the usual professional routine to go through: set up benchmarks, get test runs, and decide which code(s) to use. If programs do need to be developed, you will then have to come up with time estimates and a project plan. You might have to outsource the coding.

3) Validate the code. This is the most crucial step. The code has to be validated against as much real-world data as possible. It's also worth trying some 'self-tests'. A useful one is a 'symmetry' test. Let's say there are two sensors, one on either side of a pipe. Physically the sensors are symmetric and, in fact, indistinguishable. However, your program (algorithm) has to call them 'Sensor 1' and 'Sensor 2' in order to read in sets of readings. Since the sensors are interchangeable, swapping Sensor 1 for Sensor 2 in the program should give the same result.
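A symmetry self-test like this takes only a few lines. Here `pipe_model` and its readings are invented stand-ins for the real modelling code; the point is only that relabelling physically indistinguishable sensors must leave the result unchanged:

```python
def pipe_model(sensor1, sensor2):
    # Hypothetical stand-in for the real modelling code: estimate a
    # quantity (say, mean pressure) from two sensors on a pipe.
    readings = sensor1 + sensor2
    return sum(readings) / len(readings)

s1 = [1.02, 0.98, 1.01]   # readings labelled 'Sensor 1'
s2 = [0.99, 1.00, 1.03]   # readings labelled 'Sensor 2'

# Symmetry self-test: the sensors are physically indistinguishable, so
# swapping the labels must not change the result. (A small tolerance is
# used because floating-point sums depend on the order of addition.)
assert abs(pipe_model(s1, s2) - pipe_model(s2, s1)) < 1e-12
```

If the assertion fails, the program is treating the sensor labels asymmetrically somewhere, which is exactly the kind of bug this test is designed to catch.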

4) Establish all the test cases needed. These will mostly be of systems that can actually exist in the real world, but don't be afraid to develop the theory further here. For example, it may be worth adding a 'sensor' in a place where it is not physically possible to put one, and adding some simulated readings.

5) Run the codes and produce whatever reports are required. There might then be a loop around this step and step 4.

## My Personal Experience of Mathematical Modelling

### National Nuclear Corporation 1980-1982

The first main task I had here was to analyse the effect of a sodium-water reaction on the secondary sodium circuits of a nuclear power station. The theory was already fairly well developed by Streeter and Wylie, and goes under the engineering term 'waterhammer'; here, then, we might call it 'sodiumhammer'. In mathematical terms, it is 1D fluid dynamics, both in the sense that cross-sectional effects across a pipe are ignored, and in that the wave equation is in x and t, so there is only one space dimension. The main difference between the codes used is that one is a genuine wave model, allowing high-pressure wave 'transients' to travel round the system, whereas the other is incompressible, effectively treating the fluid in the pipes as 'pushrods'. What was interesting was that the two codes were surprisingly close in agreement after a few milliseconds, i.e. after the transients had mostly died down. So, in practice, the compressible (wave equation) code was used for the first second, and the 'pushrod' code was used for the time after that. This was necessary because the first code, needing a time step of milliseconds or less, was impractical to run for more than a second.

Now, as the codes were already there, my main job was to validate them and to run the cases. This was before the days of all data being held on computers, so a lot of my time was spent measuring large-scale engineering drawings with a ruler, and keying in datasets!
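The compressible side of such a calculation can be illustrated with a textbook sketch of a 1D wave equation time step (a leapfrog finite-difference scheme). This is not the actual NNC code; the grid size and the initial pulse are invented for illustration:

```python
import math

def wave_step(u_prev, u_curr, r2):
    # One leapfrog step of u_tt = c^2 u_xx with fixed ends,
    # where r2 = (c*dt/dx)^2 (the scheme is stable for r2 <= 1).
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next

n = 101
u0 = [math.exp(-((i - 50) / 5.0) ** 2) for i in range(n)]  # pressure pulse
u1 = u0[:]                                                 # start from rest
for _ in range(30):
    u0, u1 = u1, wave_step(u0, u1, r2=1.0)
# The pulse splits into two transients travelling in opposite
# directions along the 'pipe'.
```

The millisecond time-step restriction mentioned above comes from the stability condition in the comment: the time step dt must be small enough that c*dt/dx stays at or below 1.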

The second main job I did was modelling the dumping of sodium from the secondary sodium circuits of the fast reactor at Dounreay. There were no existing codes for this, so I developed my own model, a fairly simple one, based on pushrod flow. This was in VAX Fortran.

I also did a few 'odd jobs' at NNC. For instance, I modelled the effect of an HCDA (Hypothetical Core Disruptive Accident) on the re-fuelling ramp (see the graphic files NNC1, 2, 3, 4.gif). I have included this example because it typifies, on a smaller scale, the process I followed: "analyse the problem, model it, produce a report". This was also in VAX Fortran.

### Marconi Space and Defence Systems 1982-1984

The main mathematical modelling task I did here was to study target location codes, in the period 05.01.1983 to 04.10.1983. My experience here was as an applied mathematician and programmer, rather than a pure theorist, so I did not create any new filtering techniques, but used what was there as best I could. A lot of the programs I wrote used Kalman filters, but they are not the only technique! In fact, referring to the graphics MSDS1, 2, 3, 4.gif, you can see I tried techniques developed by N.M. Blachman from 1968.

Essentially, the mathematical model here is testing how well noise-filtering techniques work, using a variety of target scenarios. Referring to the graphic, that code is for fixed radar stations. Of course, moving receivers are of far more interest, and here Kalman filter techniques were more useful, especially as they have a 'memory' of all readings taken, but also an ability to respond to new data. Techniques like posterior probabilities (Bayesian methods) cannot always do this.

I developed all the codes for this study, again in VAX Fortran. An AED graphics terminal was invaluable here. Validation of the codes is relatively simple: you take a known (simulated) 'signal', add random noise to it, and then see how close your estimate is to the true signal.

The essential theory of Kalman filtering is as follows. Let x_i be the state of the system at some time t_i. Then

    x_{i+1} = F x_i + G w_i

where F and G are matrices and w_i is a noise vector. Then, if measurements z_i of the state are taken,

    z_i = H x_i + v_i

where v_i is a noise vector. The precise difference between system noise and measurement noise has not always been clear to me, but you can see from the above that if the state estimate is updated in the light of each new z_i, then a 'history' of all previous readings is kept in a compact manner.

A good algorithm check is that for i = 1, z_1 should give the best estimate.

Code development here was more a matter of 'tuning' the code. There has to be a balance in the filter between over-compensation and too-heavy damping; usually only trial and error can determine this.
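As a minimal illustration of the equations above (not the Marconi code), here is a scalar Kalman filter tracking a constant state through noisy measurements. F and H are taken as 1, G is folded into the noise, and the variances Q and R are invented hand-tuned values, reflecting exactly the over-compensation versus damping trade-off just described:

```python
import random

def kalman_1d(zs, F=1.0, H=1.0, Q=1e-4, R=0.25, x0=0.0, P0=1.0):
    # Scalar Kalman filter for x_{i+1} = F x_i + w_i, z_i = H x_i + v_i.
    # Q and R are the assumed system- and measurement-noise variances;
    # tuning them balances responsiveness against damping.
    x, P = x0, P0
    estimates = []
    for z in zs:
        x = F * x                      # predict the next state...
        P = F * P * F + Q              # ...and its uncertainty
        K = P * H / (H * P * H + R)    # Kalman gain
        x = x + K * (z - H * x)        # correct using the measurement
        P = (1.0 - K * H) * P
        estimates.append(x)
    return estimates

# Validation as described above: a known signal plus random noise.
random.seed(1)
truth = [1.0] * 50                             # simulated 'true' signal
zs = [t + random.gauss(0.0, 0.5) for t in truth]
est = kalman_1d(zs)
err_raw = sum(abs(z - t) for z, t in zip(zs, truth)) / len(truth)
err_est = sum(abs(e - t) for e, t in zip(est, truth)) / len(truth)
# The filtered estimate tracks the truth more closely than the raw
# measurements do (err_est < err_raw).
```

Making Q larger makes the filter more responsive but noisier; making it smaller damps the estimate more heavily, which is the trial-and-error balance referred to above.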

### Plessey Avionics 1984

See Plessey.gif. The problem here was to model the effect of pitch and roll on the incoming radar signal to an aircraft. This is by far the most complicated and demanding mathematical modelling I have done. In geometric terms, the model is not that hard to understand: a roll is simply a rotation of the coordinate system about the y-axis (through the aircraft's nose). So, if the aircraft rolls by q, then the incoming signal effectively undergoes a rotation of -q about the y-axis. But there is also pitching to handle, in a similar manner, and all of this had to be in a working program. This visualisation of the aircraft's manoeuvres has to be kept in mind while you're actually coding, and while you're going through all the different cases. This was what made it a demanding task, especially in a contracting environment.

Validation was done, as at Marconi, in a 'virtual' way, with test cases set up to cover all possible combinations of pitch, roll and yaw.
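The roll geometry can be sketched as follows. This is just the textbook rotation about the y-axis; the axis convention and the signal direction are invented for illustration, not taken from the Plessey code:

```python
import math

def rot_y(theta):
    # Rotation by theta about the y-axis (taken here as the axis
    # through the aircraft's nose).
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

q = math.radians(30)
signal = [0.0, 0.0, 1.0]            # invented unit vector towards the emitter

# A roll of q by the aircraft means the incoming signal is seen
# rotated by -q in the aircraft's coordinate system.
seen = mat_vec(rot_y(-q), signal)

# Applying the +q rotation must recover the original direction.
recovered = mat_vec(rot_y(q), seen)
```

Pitch works the same way about a second axis, and composing the two rotations (in a fixed, agreed order) is what generates all the cases that had to be worked through in the program.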

### Shell, EC, Bank of England, BP 1985-1989

These were all contracts, and the mathematical modelling had already been done: my role was to debug the systems and/or move them to a different platform. See shell.gif, bank.gif, and bp.gif.

Some of the debugging I did at Shell dealt with filtering programs. Shell.gif shows some of the notes I made at the time, to help me understand the theory of filtering. As the notes show, this is basically bandpass filtering.

At the Bank of England Printing Works, my task was really a systems analysis role: moving the banknote design system to another computer and graphics device. As part of this, I had to understand the internals of the CAD process; bank.gif shows some notes I made on this.

At BP, I was again debugging, this time seismic analysis programs. See bp.gif, which shows some notes I made on how seismic data is gathered. See also http://www.inverse-problems.com for more recent work on this.

### Inverse Problems 1990-2001

See secpat.txt, smudgit.for and ip1.gif.

The two mathematical modelling tasks I had here were modelling relief patterns, and simulating the 'smudging' process of the Digiset, which is a microfiche printing device.

The specification secpat.txt shows the proposal I made for a computer-based simulation of a printing process then done by two 'cross-pencils', which has the effect of a 3D printing illusion. The graphic bem1.gif is the last page of this spec.

The program smudgit.for shows a simulation I wrote of a photographic 'smudging' problem. The Digiset, which is essentially a photographic device, would tend to 'merge' any two adjoining dots. This made font design a bit of a nightmare, since a font looking perfectly OK on paper would come out 'blotchy' on microfiche. The solution is to 'shave' bits off the thicker lines, but this was very much a hit-and-miss procedure. I used the attached program to see how the Digiset would smudge a font, without having to actually produce the fiche. This allowed people to develop fonts and get the 'shaving' done directly on their workstations, without having to keep waiting for Digiset time.

Validation was really the same as development here: with a microfiche in front of you, adjust the code until the smudging looks as close as possible to what's on the fiche.
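The idea behind such a simulation can be sketched in a few lines. This is only an illustrative neighbour-averaging blur with an invented threshold, not the actual smudgit.for algorithm:

```python
def smudge(bitmap, threshold=0.2):
    # Toy simulation of adjoining dots 'merging' on a photographic
    # device: each cell picks up a fraction of the ink of its
    # neighbours, and the result is re-thresholded to black/white.
    rows, cols = len(bitmap), len(bitmap[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            total, n = 0.0, 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if 0 <= i + di < rows and 0 <= j + dj < cols:
                        total += bitmap[i + di][j + dj]
                        n += 1
            row.append(1 if total / n >= threshold else 0)
        out.append(row)
    return out

# Two separate dots one pixel apart merge into a solid blob:
print(smudge([[1, 0, 1]]))   # -> [[1, 1, 1]]
```

Running a candidate font bitmap through something like this shows where lines will bleed together, which is where the 'shaving' has to be applied.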

Finally, ip.gif shows my most recent, 'personal' study, into Inverse Problems. The graphic shows progress through time under the diffusion equation. Go to inverse-problems.com for the full story on that.

If you have any further queries on my work in the period 1980-2001, please leave a message on the Web page http://www.inverse-problems.com, under 'Feedback'.

Bob Marlow. 18.08.2001.