# Mathematical functions in performance diagrams

Thread Starter

Join Date: Sep 2009

Location: 64N, 020E

Age: 53

Posts: 38

**Mathematical functions in performance diagrams**

I don't know if this is the right forum, so mods feel free to delete my question if it doesn't fit.

I have taken it upon myself to digitise some scanned old performance diagrams, e.g. roll distance as a function of mass and OAT, TAS as a function of power setting, etc. The obvious goal is to allow the computations to be performed in a computer instead of on paper (that is: almost never). I have written some tools to measure the line intersections in the documents, so consider that problem solved. However, what functions should I fit to the measured coordinates? In select cases a straight line is valid, but most often not. After all, the functions are typically solutions to differential equations...

Could you advise me as to what functions are reasonable to fit to the measured data? I would like to avoid fitting simple polynomials, since they may produce oscillations between measured points. I would also like to avoid interpolating cubic splines, since they...interpolate my measured points, and the measurements are of varying quality. Ideally, I would like a closed formula for each type of curve, but I know that might be a tall order. My goal is to automate the computations, so the required precision is on the order of what you would produce with a pen and a ruler.

So, what do you suggest I do? My background is in computing science, but the physics behind these curves is beyond me. Any suggestions will be greatly appreciated.

As an example I attach the "roll distance as a function of OAT, mass, and wind" for our PA-28-161. The image shows the actual document from our POH. I apologise for the Swedish labels, but this is after all our legal document. The left panel shows OAT and pressure altitude, the middle panel is take-off mass, and the rightmost panel shows wind.


Join Date: Apr 1998

Location: Mesopotamos

Posts: 1,602

We used to do a lot of these kinds of things when I was studying for my Applied Mathematics degree (which I received with Distinction). From what I can remember, if the plots were complex you would break it down into parts and use different techniques for each - the end result was some big complex formula.

Then they invented libraries to do much of that manual legwork - e.g. MATLAB.

These days compute resources are so cheap that if you don't need super high precision then a lookup table will work just as well and probably even be quicker. They are also easier to understand and maintain - unless you enjoy a challenge.
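For a chart like the OP's, the lookup-table route really can be that simple: a small grid plus bilinear interpolation. A minimal stdlib-only sketch (the grid values below are invented placeholders, not POH data):

```python
import bisect

def bilinear(xs, ys, grid, x, y):
    """Bilinear interpolation on a rectangular table; refuses to extrapolate."""
    if not (xs[0] <= x <= xs[-1] and ys[0] <= y <= ys[-1]):
        raise ValueError("outside table limits - no extrapolation")
    # Locate the cell containing (x, y); clamp so the top edge maps
    # into the last cell rather than off the end of the table.
    i = min(bisect.bisect_right(xs, x), len(xs) - 1) - 1
    j = min(bisect.bisect_right(ys, y), len(ys) - 1) - 1
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * grid[i][j] + tx * (1 - ty) * grid[i + 1][j]
            + (1 - tx) * ty * grid[i][j + 1] + tx * ty * grid[i + 1][j + 1])

# Placeholder table: ground roll (m) indexed by OAT (deg C) and mass (kg).
oat_c = [-20.0, 0.0, 20.0, 40.0]
mass_kg = [800.0, 950.0, 1100.0]
roll_m = [
    [300.0, 360.0, 430.0],
    [320.0, 385.0, 460.0],
    [345.0, 415.0, 495.0],
    [375.0, 450.0, 540.0],
]

print(bilinear(oat_c, mass_kg, roll_m, 10.0, 1000.0))
```

The density of the digitised grid sets the accuracy, so match it to the pen-and-ruler precision you are after.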


Join Date: Dec 2003

Location: Tring, UK

Posts: 1,591

I’m sure one of the regulars who knows much more than I do about this will be along shortly to put us right. However, I was under the impression that the sort of interpolation/abstraction which might appear sensible on the surface (there is a non-approved app for stopping distance on my current type, I think using the same methods as described above to produce a result) is not deemed appropriate by the authorities as the published data can only be used “as is”, or with strict linear interpolation when allowed.

Thread Starter

Join Date: Sep 2009

Location: 64N, 020E

Age: 53

Posts: 38

Fullwings, yes, I am aware of that fact. However, my intention is to have the software use the raw image of the diagram and effectively draw lines on top of it that correspond to the computation in question. This annotated image could then be printed and should in my understanding be legal. After all, what is the difference between a copy of the published diagram with lines produced by manually using a ruler and a pencil and the same diagram where the lines have been generated by software?

Join Date: Jun 2009

Location: florida

Age: 78

Posts: 1,412

When you get the equations to agree with the observed data, let us all know and we can all donate to get the exclusive rights to all future designs by every manufacturer and defence agency in the world.

All you can do now is use polynomial curve-fitting software to show great curves at the briefings for the generals and admirals.

When you can use actual Navier-Stokes and other high-tech fluid dynamics stuff to predict the performance of your plane, let us know. I want in!


Join Date: Sep 2020

Location: Melbourne, Australia

Posts: 53

However, I was under the impression that the sort of interpolation/abstraction which might appear sensible on the surface (there is a non-approved app for stopping distance on my current type, I think using the same methods as described above to produce a result) is not deemed appropriate by the authorities as the published data can only be used “as is”

(Things will change here when the dastardly new Part 91 comes into force and only approved data must be used.)

I guess that pilots would be expected to use linear interpolation but nothing stopping anyone from using a better fit to the data (but don't do it in an exam as that risks getting the wrong answer in a multi-choice question). Extrapolation is another discussion entirely.
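For the record, the strict linear interpolation sanctioned between published rows is just the two-point formula (the figures here are invented, not from any POH):

```python
# Strict linear interpolation between two published table entries -
# the only manipulation normally allowed. Figures are invented.
def lerp(x, x0, y0, x1, y1):
    if not x0 <= x <= x1:
        raise ValueError("published data may not be extrapolated")
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Ground roll published as 345 m at 10 degC and 372 m at 20 degC;
# required at 15 degC:
print(lerp(15.0, 10.0, 345.0, 20.0, 372.0))
```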

Join Date: Dec 2003

Location: Tring, UK

Posts: 1,591

NiclasB

To me it sounds eminently reasonable. You might get a different opinion from your insurance company, design authority and/or relevant CAA.

Think how long it took to convince the powers-that-be that a digital map had at least the utility of a paper one and could act as a substitute...


Moderator

Join Date: Apr 2001

Location: various places .....

Posts: 6,718

This is an exercise which occupied me with some regularity in years gone by.

First, might I suggest that you define your requirement –

(a) if essentially for entertainment, then go for it. The basic equations are available in any of the standard aerodynamics/performance books and flight test manuals.

Unfortunately, there are various assumptions which are built into the equations so that, when we run the initial sums for an aircraft, and then test it to determine whether the sums are fit for purpose, invariably there are bits of the envelope where things don’t quite run true to equation form. What to do ? The OEM has to fudge things a bit so that the equations (and AFM/POH stuff) match what the aircraft does to the required accuracy ... but then doesn’t release the process details for general consumption.

The end result is that trying to figure out the equations from first principles is a bit of a fool’s errand.

(b) if you are looking at commercial work, there are two competing interests –

(i) we must not be non-conservative – if something goes awry and your sums can be implicated, then you could find yourself in a difficult place at the subsequent investigation and court cases – not good. So, make sure that you don’t run non-conservatively with respect to the AFM/POH performance data.

(ii) we must not be conservative (to any degree beyond that negotiated with the customer) because that affects the customer’s profitability.

Add (i) and (ii) and we get the conclusion that we necessarily must replicate what is in the AFM/POH. To this end, scanning of graphs and zooming in to figure point values is useful. Indeed, the usual limit is the printing accuracy of the OEM AFM/POH chart data – a little easier with tabular data but data precision then can become a concern.

Now, you are not likely to find serendipity and come up with the “correct” sums. So, you are stuck with simulating the charts (tables, whatever) either with regressions or look up tables and interpolation.

If you opt for regression then, as a general rule, forget multivariate regressions – it’s not going to work. So you are faced with some set of regressions of the printed data and intermediate calculated regressions or interpolations (as your fancy might run) to get an adequately precise and accurate estimate of what the AFM/POH says for any given data point.

Then, you will usually find that the data sets (especially for heavy metal) contain various discontinuities and that just makes everything more complex.

So far as polynomials doing funny things between data points is concerned, prefer low order polynomials/high data density and run a check on each and every segment, as modeled, to detect anything out of order in the sums. (It’s not often that first order fits would be of much use).

FWIW, my approach (and it worked fine, although an absolute pita to set up) was to model each of the AFM/POH data lines, using low order polynomials segmented as I saw fit and making due allowance for discontinuities. Rather than use interpolation routines for the particular variable, I chose to establish a suitable set of data points from my equations and run a suitable minimum order polynomial through that data set from which I arrived at my answer for the required data point.
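As a rough illustration of that two-stage approach (not the actual tooling described above, and the chart data below are invented placeholders, not POH values): fit each printed line with a low-order polynomial, then, at the condition you need, evaluate every line fit and pass a second low-order polynomial through the derived points.

```python
import numpy as np

# Stage 1: one low-order polynomial per printed chart line.
# Here each line is "ground roll vs OAT" at a fixed pressure altitude.
# (Invented placeholder data, not POH values.)
chart_lines = {
    0.0:    ([-20.0, 0.0, 20.0, 40.0], [300.0, 320.0, 345.0, 375.0]),
    2000.0: ([-20.0, 0.0, 20.0, 40.0], [330.0, 355.0, 385.0, 420.0]),
    4000.0: ([-20.0, 0.0, 20.0, 40.0], [365.0, 395.0, 430.0, 470.0]),
}
line_fits = {alt: np.polyfit(oat, dist, 2)
             for alt, (oat, dist) in chart_lines.items()}

# Stage 2: evaluate each line fit at the required OAT, then run a
# minimum-order polynomial through the derived (altitude, distance) points.
def roll_distance(oat_c, alt_ft):
    alts = sorted(line_fits)
    dists = [np.polyval(line_fits[a], oat_c) for a in alts]
    cross = np.polyfit(alts, dists, 2)
    return float(np.polyval(cross, alt_ft))
```

Each fitted segment still has to be checked against the printed line, point by point, to catch any misbehaviour between data points.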

The basic test was that I couldn’t run the sums better mandraulically than I could do with my program. If I could, it was a case of back to the drawing board until the end result achieved that goal. Customers were happy.

Of course, if you can buy a program from the OEM, that probably will be the better alternative as setting up the thing from scratch is not a cheap exercise.

One neat approach was followed by DCA (a predecessor regulator to CASA). When the early electronic plotters came onto the market, the chaps in the performance group came up with some fairly simple equations which were the basis of the old P-chart graphs. These, in general, were thrown out with the baby and bathwater post Yates Report some years ago - pity. Should you need to play with one of these, then the old DCA engineering reports give you the equations and you would have little trouble setting them up in the manner you so obviously desire.

The PA28 chart cited is dead simple and should be a relatively quick exercise to set up.

So long as your program always produces output which is not non-conservative with respect to the AFM/POH data, no-one is going to have any interest in trying to cause you grief. Be non-conservative and things can go from bad to worse in court after the mishap, etc.

Dave highlights one significant problem in respect of some POH data ...


Join Date: Feb 2010

Posts: 1,758

I fly various survey and ISR aircraft which have non-standard performance due to the drag and weight of various sensors and antennas. For planning ferry flights I can’t use the FOM performance figures, so I have had to build my own performance models using statistical tools available in spreadsheet programs. INDEX(LINEST) is the most useful - you can develop very accurate models of performance using observed values. So, for example, when I start testing a new aircraft variant, I note down observed IAS, density altitude and power settings over a wide range of density altitudes and speeds. Using LINEST I can then accurately predict what power setting will be required for any IAS and DA. I made another LINEST model for fuel flow against power setting, and so combining them I get range. Modelling climb and descent is more complex, but using similar tools and a bit of calculus I got that very accurately modelled too. I now have a program that can produce a perfectly accurate fuel log for ferry flights: I just input the predicted wind, temps, QNH and FL and can then optimise exactly the best IAS to fly the leg, arriving at the destination with regulation reserves and my chosen supp fuel.
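LINEST is just ordinary least squares under the hood, so for anyone working outside a spreadsheet the same fit can be sketched in a few lines of Python. The observations here are invented for illustration (not real figures from any aircraft), and the model form - power as a quadratic in IAS plus a linear DA term - is likewise only an assumption:

```python
import numpy as np

# Invented observations: IAS (kt), density altitude (ft), power setting (%).
obs = np.array([
    [100.0, 2000.0, 54.5],
    [110.0, 2000.0, 63.9],
    [120.0, 2000.0, 74.1],
    [100.0, 6000.0, 57.5],
    [110.0, 6000.0, 66.9],
    [120.0, 6000.0, 77.1],
])
ias, da, power = obs[:, 0], obs[:, 1], obs[:, 2]

# Design matrix for power ~ a + b*IAS + c*IAS^2 + d*DA; lstsq performs the
# same least-squares solve that LINEST does in a spreadsheet.
X = np.column_stack([np.ones_like(ias), ias, ias**2, da])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

def predict_power(ias_kt, da_ft):
    return float(coef @ [1.0, ias_kt, ias_kt**2, da_ft])
```

The more observations in `obs`, the better conditioned the fit; with too few points per term the coefficients are meaningless.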

Join Date: May 2001

Location: Milliways

Posts: 86

A few years ago I converted the performance graphs in my SEP POH to Excel. The process I used was to take each graph in turn, import it into Engauge Digitizer, use an Excel worksheet to 'solve' each line on each graph, and then assemble all the resultant formulas into a master spreadsheet. Somewhat tedious but effective. PM me with an e-mail address if you want some of the (quite large) files.

Join Date: Feb 2010

Posts: 1,758

You can achieve what you want (and very accurately too) by using multivariate regression, as I suggested in my earlier post.

You need to extract data points from the diagram, then feed them to LINEST in a spreadsheet program to build a multi-order polynomial in your variables. This will use least-squares regression to solve for the best-fit multi-dimensional curve through the data.

I suggest you start with a low-order polynomial such as:

D = aT + bT^2 + cM + dM^2 + eM^3 + fW + gW^2

where D = roll distance, T = temperature, M = mass, W = wind, and a...g are constants.

You will then need to somewhat laboriously and tediously extract manual data points from the diagram. The more data points you extract, the better. But you MUST input data points at maximum and minimum values of T, M, and W that you will encounter (and combinations of them). This is because the best fit curve will be very accurate interpolating within your data limits, but wildly inaccurate when extrapolating outside your limits.

You then put all your data into a spreadsheet and use LINEST to solve for the values of your constants a...g. You then have a mathematical formula for your roll distance.

The more data points you add, the more accurate it will be. You can also make a higher order polynomial if necessary.
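The same recipe can be sketched outside a spreadsheet - numpy's `lstsq` is the same least-squares engine LINEST uses. All the sample points below are invented stand-ins for values read off the chart, and the envelope guard is just one way of honouring the no-extrapolation warning:

```python
import numpy as np

def features(T, M, W):
    # Regressor columns for D = a*T + b*T^2 + c*M + d*M^2 + e*M^3 + f*W + g*W^2
    return [T, T**2, M, M**2, M**3, W, W**2]

# Invented sample points (T degC, M kg, W kt -> D m) standing in for values
# digitised from the chart - including the corners of the envelope.
samples = [(T, M, W,
            1.5*T + 0.02*T**2 + 0.5*M + 1e-4*M**2 + 1e-8*M**3 - 4.0*W + 0.05*W**2)
           for T in (-20.0, 10.0, 40.0)
           for M in (800.0, 950.0, 1100.0)
           for W in (0.0, 10.0, 20.0)]

X = np.array([features(T, M, W) for T, M, W, _ in samples])
D = np.array([d for *_, d in samples])
coef, *_ = np.linalg.lstsq(X, D, rcond=None)   # solves for a...g

# Data limits of the fitted envelope; the model is only valid inside them.
limits = {"T": (-20.0, 40.0), "M": (800.0, 1100.0), "W": (0.0, 20.0)}

def roll_distance(T, M, W):
    for name, v in (("T", T), ("M", M), ("W", W)):
        lo, hi = limits[name]
        if not lo <= v <= hi:
            raise ValueError(f"{name}={v} outside fitted envelope - no extrapolation")
    return float(coef @ features(T, M, W))
```

Interpolation inside the sampled envelope is well behaved; the guard exists because the same polynomial goes wild the moment you step outside it.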


Moderator

Join Date: Apr 2001

Location: various places .....

Posts: 6,718

**Regressions**

**You can achieve what you want (and very accurately too) by using multivariate regression, as I suggested in my earlier post.**Ah, yes and no. We are getting into apples and oranges discussions.

The OP queried simulation of existing and approved AFM/POH data. For this sort of stuff, the aim must be to replicate the approved data to a very high level of precision and accuracy, lest the legal/regulator censure machine strike thee down. When I first started playing with such things back in the late 60s/early 70s, computing was in its public infancy and available multivariate analyses didn't cut the mustard in terms of both accuracy and precision. I revisited those techniques, again, late 70s/early 80s with a similar observation. Hence my comment regarding multivariate analyses. Caveat - for simple graphical data, multivariate analysis often will provide acceptable results so the OP's cited example may well suit that technique. For more complex presentations, it all gets a tad difficult to achieve acceptable accuracy and precision.

Your story is somewhat different. For reasons which totally elude me, you appear to have a significantly modified aircraft for which the AFM/POH performance data was not updated. It is usual for any mod work to be accompanied by updated performance supplement information if such changes are other than trivial. For instance, I can recall several multis for which the MTOW figure was reduced due to WAT limit constraints associated with aerial forests. As a sideline interest, perhaps you can provide further information on this aspect of your post - if appropriate, to me by PM for my interest.

Effectively, you are working to your own defacto design standards, are the arbiter of accuracy and precision, and the arbiter of end product acceptability. Therefore, you can do the work however you might choose. In such a situation, clearly, the use of multivariate analysis can be made to be fine.

**I remember doing least squares regression by hand**Dear, oh dear ... bragging, again .. ah, the memories of machine code and assembler programming .... I couldn't do that stuff these days if my life depended on it.

Join Date: Dec 2006

Location: The No Transgression Zone

Posts: 2,424

"I couldn't do that stuff these days if my life depended on it."

For quite a few things, me either JT

I don't know if this is pertinent to the discussion here, but you can do an error analysis in order to see the boundaries of your function, i.e. the min and max points.

Based on what JT said, I think the accuracy or precision will not improve on, or match, the OEM raw data, but at least as an academic exercise it's quite interesting.


*Last edited by Pugilistic Animus; 25th May 2021 at 14:31.*

Join Date: Feb 2010

Posts: 1,758

The OP is flying a light aircraft AFAICR. He is just trying to emulate the (somewhat imprecise) performance figures derived from the manufacturer's test-flying programme, done eons ago on a brand-new aircraft with a TP at the controls. Even with the manufacturer's and regulator's adjustments to cater for the more realistic performance of a production aircraft with a moderately proficient pilot, the performance figures are pretty vague at best.

If he follows my suggestion he will be able to derive a formula that will be well within the accuracy that can be derived from that manufacturer's graph, just given the thickness of those ink-drawn lines…

The great advantage of my suggestion is that it self-adjusts as you get more data. So if the results of your derived formula disagree with what you calculate from the FOM data, just input the new data point into your database and the polynomial constants will adjust. But really, given the simplicity of that performance graph (largely linear with temp and wind, maybe second order with mass), a very simple formula derived from 20-30 data points will be very accurate (just remember my warning about extrapolation outside of limits!).


Moderator

Join Date: Apr 2001

Location: various places .....

Posts: 6,718

**The OP is flying a light aircraft AFAICR.**Clearly the case from his cited example.

**He is just trying to emulate the (somewhat imprecise) performance figures derived by the manufacturer test flying program,**Probably not quite the case. The usual design approaches adopted produce reasonably accurate and, certainly, fit for purpose, data for light aircraft. The more rigorous techniques applied by the heavy iron OEMs result in quite accurate performance information. Again, I would emphasise that there is a major difference between simulating AFM/POH data, for which one may attract a liability, and setting up routine monitoring protocols. For instance, in respect of the latter, every airline and the majority of other commercial operators will keep very tight records of inflight performance and maintain/update flight planning data to be used by pilots on the basis of individual tail history. However, takeoff and landing data in the AFM/POH is the basis for the pilot's compliance with legislated requirements. For example, in Australia, CAR (1988) 138 (1) imposes a specific responsibility on the pilot to comply with requirements published in the AFM/POH - I would be surprised to see a substantially different requirement in other jurisdictions.

**If he follows my suggestion he will be able to derive a formula that will be well within the accuracy**This I would dispute in some, but not all, examples of AFM charts. It all depends on how the charts are developed. More importantly, the simulation has to produce accurate and defensible values for interpolation between printed data. So long as the simulated data can be defended conclusively in court, all is fine.

**The great advantage of my suggestions is that it self-adjusts as you get more data.**Fine for route performance monitoring but not, I suggest, for the takeoff and landing concerns of the OP.

**given the simplicity of that performance graph (largely linear with temp and wind, maybe second order with mass)**I presume you are referring to the cited example. Certainly a simple presentation but not much in it will be linear, I fear. Generally, in my experience, one is at risk going beyond a third order equation ... nor is that necessary. If it appears to be so, one needs to segment the line data.

**remember my warning about extrapolation outside of limits!**This comment causes me some concern as it implies the use of needlessly high order equation simulations. One needs to keep in mind that extrapolation of certification data is a no-no and would be difficult in the extreme to defend in court.

I guess we just need to agree to disagree on a couple of points ?

Join Date: Feb 2010

Posts: 1,758

I get the impression you are coming at the problem from a certification point of view, which is not really the objective of the OP or indeed myself.

In my case I fly a collection of DA42 MPPs for aerial work. There are three different engine types in our fleet (CD-135, CD-155, and AE300) and multiple different aircraft appendages (a fat square nose for a survey camera, a long belly pod for lidar work, a nose with an EO/IR camera (which sometimes has to be flown with an even draggier cover on ferry flights), sometimes a Scotty uplink dome on the back, sometimes radar or GSM intercept equipment on the underside, and other combinations). All of these variants have different performance. When we get a new variant, I need to work out a performance model quickly, as I need to fly it on a long delivery trip, usually through Africa where there can be long distances between airfields, and on one occasion to South America - and there is no usable data in the FOM. To do this I just select the data sets of the "nearest" previous configurations that I have flown (these have around a thousand real observed data points throughout the flight and engine performance envelope) and build a new approximate model. In an hour or so of test flying, I can get enough real observed performance to massage the data sets into a more precise model (there are good statistical tools in spreadsheets to do this). I then have an accurate enough model to route-plan, so that I know what power setting I will have to fly to achieve any given leg in the quickest time but arrive with regulatory fuel reserves. As I fly a new variant I keep adding real observed data to the model and prune out the old data. This eventually leads to a highly accurate model.

Also, for survey and ISR flying, FOM performance data is not relevant. E.g. for a survey profile, I may need to fly 250 nm to the AOI, descend to the survey GPS elevation, fly the AOI at a constant groundspeed, then fly back and arrive with regulatory minimums. This sort of profile is really impossible to plan efficiently without a decent performance model. But with the accuracy of the data I now have, I can make a fuel plan in around five minutes prior to a flight, inputting just winds, QNH, temps and pressures from Windy, and then know in advance exactly what power settings and FLs I need to select on the outgoing and incoming legs, exactly what PA I need to descend to for the survey, what power setting I will need to achieve the correct groundspeed, and exactly how long I can spend on the productive survey flying. When I fly the profile, I get back to base within a few minutes of the predicted time and within 2% of my predicted final reserve fuel. It thus greatly improves the productivity of our aircraft per engine hour.

I can even adapt the methodology quickly to entirely new types. A few years ago we got an AS350 for some survey work, with external LIDARs and cameras bolted to the skids; within just a few hours of test flying to gather observed data, I could adapt the model to the helicopter.

**I presume you are referring to the cited example. Certainly a simple presentation but not much in it will be linear, I fear. Generally, in my experience, one is at risk going beyond a third order equation ... nor is that necessary. If it appears to be so, one needs to segment the line data.**

I don't see why there should be a "risk", as long as you do not attempt to extrapolate outside of your data set. For my model of power setting as a function of IAS, density altitude, and mass I use fourth order in all three variables; R comes out at 0.98. I don't see any harm in using higher orders; it just increases accuracy. I am using fifth order for some other parts of the model (particularly engine performance at high altitudes). For the OP's example, I would probably use third order for all three variables, but would suggest he start with lower orders if he is not familiar with the LINEST function.
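For readers unfamiliar with LINEST: it performs a linear least-squares fit against whatever columns you hand it, so a "third order" fit is just a linear regression against x, x², and x³. The same fit can be reproduced outside a spreadsheet with NumPy; the data points below are invented purely for illustration, not taken from any POH.

```python
import numpy as np

# Hypothetical digitised points: ground roll (m) against OAT (deg C) read off
# one curve of a takeoff chart at a fixed mass and pressure altitude.
oat = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
roll = np.array([430.0, 455.0, 484.0, 517.0, 555.0, 598.0])

# Third-order fit: the same regression LINEST performs when handed the
# columns oat, oat^2, oat^3 alongside the roll column.
coef = np.polyfit(oat, roll, deg=3)
fitted = np.polyval(coef, oat)

# Goodness of fit (R^2), analogous to the statistic LINEST reports.
ss_res = float(np.sum((roll - fitted) ** 2))
ss_tot = float(np.sum((roll - roll.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot
print(f"coefficients (highest order first): {coef}, R^2 = {r_squared:.4f}")
```

Starting at first or second order and only raising the degree while R² materially improves is the spreadsheet equivalent of the "start with lower orders" advice above.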

**This comment causes me some concern as it infers the use of needlessly high order equation simulations. One needs to keep in mind that extrapolation of certification data is a no-no and would be difficult to defend in court in the extreme.**

I am not using my calculations for certification, and it is purely for my personal use, so no chance of ending up in court! But I build safeguards into the model to make it impossible to accidentally extrapolate outside of the observed data set. As long as you stay within your data set, higher orders greatly increase accuracy. Out of curiosity I have gathered data right through the flight envelope, and the curve near the bottom of the lift/drag curve, or near the stall, would be really inaccurate if I did not use high orders.
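Safeguards against accidental extrapolation can be as simple as recording the extent of the observed data and refusing queries outside it. The sketch below, with hypothetical names, uses an axis-aligned bounding box; note this is a loose guard, since corners of the box can lie outside the convex hull of the actual observations, so a hull test (e.g. via scipy.spatial.Delaunay) is tighter if the data are sparse.

```python
import numpy as np

class GuardedModel:
    """Wrap a fitted predictor and refuse to evaluate outside the
    axis-aligned bounding box of the observed data: a simple,
    illustrative stand-in for the safeguards described above."""

    def __init__(self, observed_X, predict_fn):
        self.lo = observed_X.min(axis=0)   # per-variable minima of the data
        self.hi = observed_X.max(axis=0)   # per-variable maxima of the data
        self.predict_fn = predict_fn

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        if np.any(x < self.lo) or np.any(x > self.hi):
            raise ValueError(
                f"query {x} lies outside the observed envelope "
                f"[{self.lo}, {self.hi}]; refusing to extrapolate")
        return self.predict_fn(x)
```

In a spreadsheet the same idea can be expressed with an IF test against the recorded minima and maxima of each input column, returning an error flag instead of a value.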

Moderator

Join Date: Apr 2001

Location: various places .....

Posts: 6,718

**Regressions**

Certainly an interesting discussion.

**you are coming at the problem from a certification point of view**

Both certification (in that I have been involved in developing AFM performance data in the past) and operations engineering (in that I have spent considerable time, in years gone by, as an airline operations engineer). As I suggested earlier, there are two distinct areas of interest: takeoff and landing data simulations, where accuracy and precision are necessary for the avoidance of potential legal concerns, and routine performance, where the operator is concerned with flight planning and flight following.

**multiple different aircraft appendages ..... All of these variants have different performance**

As I queried previously, these are significant mods and would, in the usual course of events, be associated with STCs or similar protocols and revised performance data in the AFM/POH supplement(s). Are you suggesting that such did not occur? Inconceivable within a normally disciplined operation overseen by a competent Regulatory Authority. On that point, of course, you have not identified the State under which the aircraft are registered. The supplements should cover revised takeoff and landing performance but, in all likelihood, may have been a tad relaxed for planning data.

**there is no usable data in the FOM**

... suggests the latter to be the case. What is the situation for the takeoff and landing data?

**To do this**

That's all fine and, philosophically, no different to what legions of airline operations engineers do for their daily crust.

**Also for survey**

Likewise. It is acknowledged that lighties often have a few gaps in the area of flight planning data.

**I don't see why there should be a "risk"**

The risk with which I was always concerned related to the degree to which the model might be unpredictable in areas of the envelope, the probability increasing with order. This is particularly the case with multivariate analyses, where the subsequent interpolation requirements require significant development testing to ensure that the thing is well behaved. Very early on, I came to the decision that a safer strategy was to stick to lower orders and, where necessary, just segment the data sets to achieve whatever level of precision and accuracy I was after. From similar reasoning, I abandoned multivariate analyses and used low-order interpolation routines which I knew weren't going to bite me on the tail in operational use.
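The segment-and-stay-low-order strategy can be sketched as fitting independent low-order polynomials over subranges of the data. This is a minimal illustration with hypothetical names; note the pieces are fitted independently, so they are not guaranteed to be continuous at the breakpoints, and in practice one would overlap or blend segments where that matters.

```python
import numpy as np

def segmented_fit(x, y, breakpoints, deg=2):
    """Fit an independent low-order polynomial over each segment of the
    data, instead of one high-order polynomial over the whole range."""
    edges = [x.min()] + list(breakpoints) + [x.max()]
    pieces = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)          # points in this segment
        pieces.append((lo, hi, np.polyfit(x[mask], y[mask], deg)))
    return pieces

def segmented_eval(pieces, xq):
    """Evaluate the piecewise model; refuse queries outside the fitted
    range, so the model cannot extrapolate."""
    for lo, hi, coef in pieces:
        if lo <= xq <= hi:
            return np.polyval(coef, xq)
    raise ValueError("query outside fitted range")
```

Breakpoints would typically be placed where the curve changes character, for example where a takeoff chart transitions between regimes, so each segment stays well behaved at second or third order.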

**as long as you do not attempt to extrapolate outside of your data set.**

Forcing the use of low orders, associated with limited manual extrapolation during development, provided the facility to extrapolate in operations to determine non-critical limitations which, on occasion, were useful for assessing operational penalties for degraded-systems operation.

**... I use fourth order ... I am using fifth order for some of the other parts of the model**

I'd avoid that like the plague, but let's just agree to disagree on preferred strategies?

**I am not using my calculations for certifications**

I see no fundamental difference; the potential problem relates to model discipline in use. The higher the order, the higher the testing load.

**As long as you stay within your data set, higher orders greatly increase accuracy.**

So long as the data set is significantly large. I just don't see any real advantage in reducing the number of data sets to a minimum when it is so easy to set up additional lower-order models to avoid the problem altogether.

**near the stall would be really inaccurate if I did not use high orders.**

By segmenting the datasets the problem doesn't arise.