Computers in the cockpit and the safety of aviation

Old 23rd Jan 2011, 18:41
  #121 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Peter, raw data or otherwise, those at the front of the aircraft are going to use whatever is presented. Thus as you know, a key aspect of certification is that this data must not be hazardously misleading.
There will be, as throughout aviation history, the rare exception of low-accuracy data (a computer ‘glitch’), often resulting in an accident – and unfortunately these exceptions are what we tend to focus on. If we are discussing these, then it may be beneficial to look at the complete safety arena, e.g. comparing the accident rate from computer problems against overrun accidents – both from a human and a technological viewpoint.

However, the debate (as usual) comes from just a few views. Predominantly there is a division between the academic (certification) / engineering view and that of the operating crew.
Many issues lie in the assumptions originating from these views. The designer/certification engineer may assume a particular level of pilot knowledge and proficiency, whilst the pilot assumes ‘foolproof’, accurate information. Perhaps these are extreme examples, but each view builds up a store of false information or bias about a particular operation. Accidents often originate from these beliefs.
Also, it’s the assumed context in which systems operate that can cause problems. An example, yet to be proven, might be the rare, short periods of flight without reliable airspeed. The assumption that pilots can manage with pitch/power has been shown historically to be good enough in a benign context (aircraft type and weather), but in the context of a highly augmented aircraft with multiple failures, at night, with a relatively ‘inexperienced’ crew, and when penetrating a line of storms, it may be too much to expect.

Even then, there may still be two views; the pilots suggest design/certification action, but design/certification suggests more operator/pilot training.
It matters little in these high level safety debates whether the data is raw or ‘enhanced’; in an emergency the pilot seeks a compromise solution, as no doubt does the design engineer before certification.
Perhaps both factions require a better understanding of each other’s viewpoint and capabilities; the resultant educated compromise will benefit safety.
alf5071h is offline  
Old 23rd Jan 2011, 19:24
  #122 (permalink)  
Moderator
 
Join Date: Apr 2001
Location: various places .....
Posts: 7,187
Received 94 Likes on 63 Posts
Perhaps both factions require a better understanding of each other’s viewpoint and capabilities; the resultant educated compromise will benefit safety

Which is why there will always be a role for the certification TP.
john_tullamarine is online now  
Old 24th Jan 2011, 02:01
  #123 (permalink)  
 
Join Date: Jun 2010
Location: USA
Posts: 245
Likes: 0
Received 0 Likes on 0 Posts
That is often what a good hazard & risk analysis of data corruption would suggest is the best information to provide to the eyes in the front seat
Aye, and there's the rub.

Where in the traditional pantheon of aviate, navigate, communicate does risk analysis fit in? I remember a comment I once read from the captain of UA Flight 232 when asked by a reporter how he knew what to do after he lost control of all his flight surfaces. His response: "Well, we just tried the first thing that came into our heads and thankfully it worked." [That's a paraphrase but it gets the gist.]

Risk is inherent in complex systems. And where there is risk, if a man is honest, there is luck. Good luck. Bad luck. I'm not sure that the wise course of action is to toss the burden of risk analysis of complex data systems into the pilot's lap. Might he be better off taking a mid-point in a range of values? He might. Might a pilot be better off believing instrument x over instrument y? He might.

Or maybe he might just be better off flying the plane and saying a prayer.

Last edited by MountainBear; 24th Jan 2011 at 02:53.
MountainBear is offline  
Old 24th Jan 2011, 07:38
  #124 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by alf
Perhaps both factions require a better understanding of each other’s viewpoint and capabilities; the resultant educated compromise will benefit safety
Originally Posted by JohnT
Which is why there will always be a role for the certification TP.
Yes, but his/her role is limited, by virtue of the math. Given the complexity of today's designs and the dependence of almost every control-loop data path (in the sense in which I introduced the term) on SW, the prevalent reliability model must be the exponential model used for SW reliability.

Given that model, it is not possible to test statistically, through flight test or indeed veridical simulation, a design's resilience to major, hazardous or catastrophic effects - three out of the four classes of in-flight anomaly. That has to be performed entirely in the head.

That circumstance is what makes the airworthiness assessment fundamentally different nowadays from what it used to be a few decades ago. Or, rather, what should make it different.

PBL
PBL is offline  
Old 25th Jan 2011, 23:50
  #125 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Peter, I disagree that the role of the certification pilot is limited by ‘math’.
Modern systems certification involves both man and machine; thus, more than one perspective is required, and neither need dominate.
If the certification is to be done mainly ‘in the head’, then why not use the head of a certification pilot (test pilot, evaluation pilot, and line pilot), who should have the better understanding of the context – the situation in which an anomaly has to be evaluated and a plausible crew response judged. It is the combination of man and machine that has to be resilient.

In essence, this thread asks if modern designs are good enough; but, alternatively, are humans good enough to operate the human-inspired designs?
Furthermore, instead of framing the problem as failures in design or operation, perhaps we should be asking why the certification process (judgement), which promotes safety by regulation of both man and machine, appears to have failed. Has it failed because it is now fundamentally different or because it still needs to change?
alf5071h is offline  
Old 26th Jan 2011, 06:36
  #126 (permalink)  
 
Join Date: Jun 2010
Location: USA
Posts: 245
Likes: 0
Received 0 Likes on 0 Posts
but, alternatively, are humans good enough to operate the human-inspired designs?
Let me rephrase this question slightly....

What are the limitations of a human being in a complex data acquisition environment when he/she has to make judgments in a matter of a few seconds?

What are the limitations of computer software when presented with circumstances beyond design parameters?

To me it is obvious that when (a) the pilot is not the software designer and (b) the software designer is not on the flight deck, a perfect interface between the two is impossible, and no amount of training or design can change that reality.

If that conclusion is true, then the next question becomes just how much cash should be thrown at software design and flight crew training and how much should be left up to the proverbial "wing and a prayer."
MountainBear is offline  
Old 26th Jan 2011, 06:55
  #127 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by alf
Peter, I disagree that the role of the certification pilot is limited by ‘math’.
alf, that may be either because I haven't explained myself well, or because you are not that familiar with the statistical reasoning, or both.

The practical limit of statistical testing of software-based functionality is around one failure/dangerous failure per hundred thousand hours, i.e. a rate of 10^(-5) per op-hour. You can bench-test the kit to this level, and maybe perform a certain limited variety of partial-integration tests, but you can't do full integration without flight test.

Keep in mind that the certification standard for DAL A critical kit is 10^(-9) per op-hour, that is, ten thousand times the reliability level of which you can be assured to any reasonable level of confidence by bench testing and flight experience.
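
To give a feel for the scale of that gap, here is a minimal sketch (purely illustrative Python, not drawn from any certification standard; the function name and the 99% confidence figure are my assumptions) of the classic zero-failure test-duration calculation under the exponential model: to support a claim that the failure rate is no worse than some target with a given confidence, a failure-free test must run for about ln(1/(1 - confidence)) / target hours.

import math

def failure_free_hours_required(target_rate, confidence=0.99):
    """Failure-free test hours needed to support a rate <= target_rate per op-hour."""
    return math.log(1.0 / (1.0 - confidence)) / target_rate

# Practical bench-test limit, then the 'hazardous' and 'catastrophic' orders of magnitude
for rate in (1e-5, 1e-7, 1e-9):
    hours = failure_free_hours_required(rate)
    print(f"rate {rate:.0e}/op-hour -> {hours:.1e} failure-free hours "
          f"(~{hours / 8766:.0f} years of continuous operation)")

For 10^(-9) per op-hour this comes out at several billion failure-free test hours, which is why the assurance has to come from analysis rather than from testing.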

If you want to be assured with reasonable confidence that dangerous anomalies will not occur with a probability any greater than 10^(-6) per op-hour, it will actually take you the total op-hours in the entire service life of the fleet to do so. And you are still, at 10^(-6), a factor of one thousand short of the usual certification requirement for catastrophic events, and a factor of ten short of that for hazardous events. That is the combinatorics of software anomalies and there is no way around that math. Recall what you said earlier:
Originally Posted by alf
raw data or otherwise, those at the front of the aircraft are going to use whatever is presented. Thus as you know, a key aspect of certification is that this data must not be hazardously misleading.
and it follows from what I just said that you currently cannot confidently get within a factor of ten of that assurance by general methods. There are some specific methods for specific architectures which promise to be able to attain such assurance with confidence, but these methods are state-of-the-art research (I just reviewed what will be a seminal piece of work on this, which should appear in 2011. Then add the umpteen years it will take for this to become common knowledge.....).

It took ten years of flying Boeing 777's around the world before the critical configuration anomaly showed itself out of Perth in 2005. It took 15 years of flying A330's around the world before the filtering anomaly showed up at Learmonth.

Software-based systems are simply different. The math was put out there by a couple of seminal papers in 1993, and at the turn of the century there were still some supposedly-knowledgeable avionics designers who did not know the hard limitations on testing of software or "proven through experience" supposed-validations. Ten years after that, with Byzantine anomalies on one heavily-used machine that came within days of having its AW certificate revoked, the 2005 Perth incident and Learmonth and similar, avionics engineers and assessors are somewhat more aware of the severe limitations.

I work on critical-digital-system standardisation committees with engineers who were still not precisely aware of the statistical limitations even a couple of years ago, fifteen years after the published results, even though there was a general awareness. However, the situation has recently changed in some countries such as Germany. I can't talk about the work until it is concluded and published, though, because of the protocols involved in standardisation work. It does not cover either avionics or medical equipment - just everything else.

Originally Posted by alf
Modern systems certification involves both man and machine; thus, more than one perspective is required, and neither need dominate.
Unfortunately the math dominates, as the auto industry now knows well. Manufacturers and component suppliers do extensive road testing of all bits of kit, as well as an enormous amount of unit testing and partial-integration testing. But some of that kit really does accumulate 10^8 to 10^10 op-hours, amazingly, across all the installations throughout the industry. And it fails. And that costs the manufacturers and suppliers huge amounts of money in compensation, which they don't talk about but would dearly like to reduce.

The aviation industry doesn't see that - often - because the op-hours aren't there.

That doesn't make the role of a certification test pilot any less important than it ever was, as you carefully point out with good reason. But there are some things he/she just can't do.

PBL

Last edited by PBL; 26th Jan 2011 at 07:10.
PBL is offline  
Old 28th Jan 2011, 18:01
  #128 (permalink)  
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
Likes: 0
Received 1 Like on 1 Post
Peter, the statistical explanation does not clarify how a pilot is limited in the overall certification, even though in your view the math dominates.

Considering two recent accidents (A330 AF447 and 737 TK1951), the system problems originated with the sensors where known limitations of software, operating as designed, created operational problems. There was nothing to find in bench testing at whatever level was tested.
The resultant operational problems relate to the human-system interface, the situation, and human behaviour; AFAIK behaviour cannot be modelled adequately by math / bench tests. Thus it is in the human-situation area that a pilot might aid certification.

With respect to the process of certification, the current statistical approach is limited as you describe, yet the industry seeks resilience both in systems and operation to improve safety. Does that imply that resilience cannot be achieved with statistics?
With an enormous caveat of hindsight, in the two accidents, each of the sensor faults had been previously identified and considered by the regulators; the resultant decisions lacked elements of resilience.

For the A330, I assumed that the assessed risk of loss of all airspeed was statistically remote, but this was a judgement rather than something proven for the prevailing conditions, and equally there wasn't a total loss of sensed speed. The inadequacy was in the design specification for sensor selection, yet this was statistically acceptable in certification. The operational question is whether this acceptability was (with hindsight) satisfactory for all scenarios – yes, it’s OK on a clear day with an experienced crew, but perhaps not at night near Cbs. It is this sort of judgement which a pilot should be able to help with.

The 737 accident IMHO is clear cut – a problem of grandfather rights. Rad alt anomalies were known; new installations either use triple mix or modern dual self-monitoring sensors. This newer 737 just used the old standard, allowed by certification. However, consider which operating standard the certification assumed – what the crew would do – possibly that of the latest ‘state of the art’ system (note the similarities with the MD-80 take-off config warning). Thus there was a gap between what should happen in operation (assumption) and what actually did happen (reality); it is the nature and significance of this gap which cannot be identified by statistics, but pilot input could provide guidance, experience and intuition.
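
For illustration only – a minimal sketch of the mid-value-select idea behind ‘triple mix’, not the actual 737 installation or any certified implementation; the miscompare threshold and the sample values are assumptions:

def triple_mix(ra1_ft, ra2_ft, ra3_ft, miscompare_ft=50.0):
    """Toy mid-value-select voter for three redundant rad alt readings (feet)."""
    readings = sorted([ra1_ft, ra2_ft, ra3_ft])
    mid = readings[1]                                        # the mid value ignores one wild reading
    flagged = (readings[2] - readings[0]) > miscompare_ft    # simple miscompare monitor
    return mid, flagged

# A single sensor failing to a spurious negative reading is simply outvoted,
# and the disagreement is flagged rather than fed silently to downstream systems:
print(triple_mix(1950.0, 1942.0, -8.0))   # -> (1942.0, True)

The point of the sketch is only that where a single sensor feeds a system directly, as under the older standard, there is nothing to outvote a wild reading.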

A final point on resiliency is that the concept requires organisations to ‘learn’. In both accidents, the regulators did not learn from preceding incidents. This is a weakness of both the certification process (continued airworthiness) and the humans in the process; a weakness perhaps aided by the statistical approach and its associated statistical thinking. Thus I would argue for the process to change: there should be a balancing contribution from non-statistical operational judgement.
If not, the industry will have to accept rare accidents such as AF447 – limitations of design and human judgement in certification, and as with TK1951 – limitations of the operating human and the certification process.
I don’t judge which end of the system, design or human, requires change, but point out that there is something in the middle where greater pilot involvement than currently recognised might help make that judgement, preferably before the event.

Last edited by alf5071h; 28th Jan 2011 at 18:12.
alf5071h is offline  
Old 28th Jan 2011, 20:39
  #129 (permalink)  
 
Join Date: Jun 2010
Location: USA
Posts: 245
Likes: 0
Received 0 Likes on 0 Posts
If not, the industry will have to accept rare accidents such as AF447 – limitations of design and human judgement in certification, and as with TK1951 – limitations of the operating human and the certification process.
What's so wrong with industry treating this acceptance as the desired outcome rather than a hinted-at tragedy?

Stated in economic terms: at some point in time the marginal utility of the next incremental improvement in safety becomes negative.

I find PBL's insistence on the math curious, because it's Bayes' theorem that says that when presented with statistically rare events we are better off just ignoring those events than trying to solve for them. I guess when those rare events involve the deaths of many human beings then all of a sudden the math goes right out the window and industry has to look like it's doing something. What it tells me is that underneath all the hardheaded talk about math and volumes of proofs lies a warm and beating heart that is the ultimate decision maker.
MountainBear is offline  
Old 30th Jan 2011, 19:23
  #130 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by MountainBear
I find PBL's insistence on the math curious, because it's Bayes' theorem that says that when presented with statistically rare events we are better off just ignoring those events than trying to solve for them.
Actually, MB, when testing supposedly-ultra-reliable systems, Bayesian methods say that when presented with statistically rare events, such as a failure behavior of the system under test, we are better off throwing the system away and starting again.
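
One simplified way to see the flavour of that point is standard conjugate gamma/exponential updating of a failure-rate estimate (the prior and test figures below are arbitrary assumptions for illustration, not anyone's certification numbers): failure-free hours move the estimate only slowly, while a single observed failure moves it sharply in the wrong direction.

def posterior_mean_rate(prior_shape, prior_hours, failures, test_hours):
    """Posterior mean failure rate (per hour) under a gamma prior and exponential failures."""
    return (prior_shape + failures) / (prior_hours + test_hours)

prior_shape, prior_hours = 0.5, 1.0e6   # weak prior centred on 5e-7 per hour (assumed)
test_hours = 1.0e5                      # roughly the practical bench-test limit

print(posterior_mean_rate(prior_shape, prior_hours, 0, test_hours))  # ~4.5e-7: little gained
print(posterior_mean_rate(prior_shape, prior_hours, 1, test_hours))  # ~1.4e-6: one failure roughly triples it

Hence the advice: once the system under test actually fails, the ultra-reliability claim is no longer supportable from the evidence, and the economical response is to redesign rather than to keep testing.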

Originally Posted by MountainBear
What it tells me is that underneath all the hardheaded talk about math and volumes of proofs lies a warm and beating heart that is ultimate decision maker.
Fine words. But the certification regulations require that a case be presented, and if you are a manufacturer of FBW aircraft you have to persuade the regulators that your critical systems have a failure rate of less than 10^(-9) per op-hour. So someone on the manufacturer's side has to do a bit of math to say "here's the argument" and someone on the regulator's side has to follow that math to be able to say "this is a good/insufficient argument". It's easier with hardware, because the properties of hardware are continuous (something breaks; you make it stronger). But it is devilish hard with software. And, humans in it or not, everything in the control loop(s) of an FBW aircraft goes through large amounts of digitally-programmed behavior. You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong. So they had better be right. And that is where the math comes in.

PBL
PBL is offline  
Old 30th Jan 2011, 20:43
  #131 (permalink)  
 
Join Date: Dec 2006
Location: retirementland
Age: 79
Posts: 769
Likes: 0
Received 0 Likes on 0 Posts
you have to persuade the regulators that your critical systems have a failure rate of less than 10^(-9) per op-hour
Of course reliability is not an attribute of software.
Shell Management is offline  
Old 30th Jan 2011, 22:39
  #132 (permalink)  
 
Join Date: Jun 2010
Location: USA
Posts: 245
Likes: 0
Received 0 Likes on 0 Posts
Bayesian methods say that when presented with statistically rare events, such as a failure behavior of the system under test, we are better off throwing the system away and starting again.
Correct, when viewed from the perspective of the software designer. But while the pilot has the luxury of throwing systems away (flying the plane manually), the pilot doesn't have the luxury of rebuilding complex software systems on the fly. He has to deal with the failure as it is, in a few seconds, with many lives at stake.

You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong. So they had better be right. And that is where the math comes in.
Exactly. And I'm in full agreement with you so long as we understand "right" to be statistically right, that is, probabilistic.
MountainBear is offline  
Old 1st Feb 2011, 09:04
  #133 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by MB
Correct, when viewed from the perspective of the software designer.
Thank you. I'm always pleased to know when I have said something that is right, especially when it is something on which I am expert

Also correct, BTW, when viewed from the perspective of the software user. In this case, the pilots.

Originally Posted by MB
But while the pilot has the luxury of throwing systems away (flying the plane manually)
You cannot fly most modern commercial transport aircraft "manually". Everything a pilot sees and does, from "raw data" to control responses, is part of a control system loop which goes through a number of programmable-electronic systems. (I do acknowledge that on Boeing 737 aircraft, some of the control loops are still analogue mechanical systems. I doubt that will last another twenty years.) Anything a pilot wants to see or do rests on the reliable behavior of those programmable-electronic systems.

Maybe some just have to see it to believe it. We draw causal control-flow diagrams of airplane systems in which the pilot is part of the control loop. It is a valuable analytical technique which we occasionally try to teach to others, but for the most part remain best at ourselves. For most control parameters (or what one might think of as such), these graphs have twenty to forty elements, of which at most three are "pilot see", "pilot think", "pilot do" and the great proportion of the rest are programmable-electronic.
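
As a purely hypothetical illustration – the node names below are invented for this post and are not taken from any real aircraft analysis – the shape of such a graph can be written down as a small adjacency map and the proportion of "pilot" nodes counted:

# Hypothetical causal control-loop graph for a speed display / pitch control path.
# Node names are invented; the only point is how few nodes involve the pilot.
control_loop = {
    "pitot_probe":              ["air_data_module"],
    "air_data_module":          ["adiru"],
    "adiru":                    ["ads_voter"],
    "ads_voter":                ["display_computer", "flight_control_primary"],
    "display_computer":         ["pfd_speed_tape"],
    "pfd_speed_tape":           ["pilot_see"],
    "pilot_see":                ["pilot_think"],
    "pilot_think":              ["pilot_do"],
    "pilot_do":                 ["sidestick_transducer"],
    "sidestick_transducer":     ["flight_control_primary"],
    "flight_control_primary":   ["flight_control_secondary", "actuator_electronics"],
    "flight_control_secondary": ["actuator_electronics"],
    "actuator_electronics":     ["elevator_servo"],
    "elevator_servo":           [],
}

pilot_nodes = [n for n in control_loop if n.startswith("pilot_")]
other_nodes = [n for n in control_loop if not n.startswith("pilot_")]
print(len(pilot_nodes), "pilot nodes;", len(other_nodes),
      "other elements, most of them programmable-electronic")

Even in this toy version, the three "pilot" nodes sit downstream of, and act back through, the programmable-electronic elements.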

Now, all of those programmable-electronic elements are subject to the statistical phenomena about which I have been talking. If you think that an anomalous condition can almost always be saved by those three nodes containing the word "pilot" above, then I can only admire your faith in the ability of software engineers to write perfect multi-ten-thousand-to-million-line programs. I can also say that few in the industry share that faith, although some do profess it publicly on behalf of their employers.

PBL
PBL is offline  
Old 1st Feb 2011, 23:57
  #134 (permalink)  
 
Join Date: Jun 2010
Location: USA
Posts: 245
Likes: 0
Received 0 Likes on 0 Posts
Also correct, BTW, when viewed from the perspective of the software user. In this case, the pilots.
I admit I'm baffled.

In a prior post you said this:

You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong.
The reason that Bayes' theorem implies a different course of action for software designers as opposed to pilots is that the factual situation changes. Software designers have the luxury of rebuilding the system; pilots don't.

I agreed with your slight redefinition of my original comment because I thought we were trying to say the same thing, only using slightly different words. Now I wonder if you are just being argumentative.
MountainBear is offline  
Old 2nd Feb 2011, 07:46
  #135 (permalink)  
Per Ardua ad Astraeus
Thread Starter
 
Join Date: Mar 2000
Location: UK
Posts: 18,579
Likes: 0
Received 0 Likes on 0 Posts
Now I wonder if you are just being argumentative.
- I have often thought that too, but on balance I don't think PBL understands what we mean by 'raw data'. To me (as a pilot) this means that although the data has passed through many ICs and the like it is essentially the 'truth' and not some software programmer's interpretation of what he/she THINKS I should be seeing; and in the case of control functions it should be what I ask of the system. My training should then govern what I ask.

I am now old and 'retired', but I grew up in a world where I could (1) stall an aircraft if I wished, (2) exceed the g limitation where necessary to avoid dying, (3) choose which of three differing inputs I wished to accept, and (4) expect my control surfaces to do what I actually ask. It now appears that these choices are being removed, and while there is no logical statistical argument for 1 and 2 in the civil world, 3 is vital and should not be delegated to some programme with some 'acceptable' level of error, and 4 is 'ideal'. I am, however, delighted to be given 'information' on what HAL thinks is wrong, but I don't want him interfering, Dave.

The problem comes (in line with my thread) when pilot training and ability become so degraded as to make pilots 'system operators' only, when all the 'interferences' above become essential and we route inexorably towards the Airbus 'captain and dog' world.
BOAC is offline  
Old 2nd Feb 2011, 20:46
  #136 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by MB
Now I wonder if you are just being argumentative.
Originally Posted by BOAC
- I have often thought that too, but on balance I don't think PBL understands what we mean by 'raw data'.
Gentlemen, please don't let's be tempted to slip into gratuitous insults merely because we don't understand the relevant engineering! The title of this thread is Computers..and..Safety of Aviation, a matter on which I happen to be expert. If that's what you want to discuss, fine. If not, may I suggest you just let it be?

PBL
PBL is offline  
Old 2nd Feb 2011, 21:30
  #137 (permalink)  
Per Ardua ad Astraeus
Thread Starter
 
Join Date: Mar 2000
Location: UK
Posts: 18,579
Likes: 0
Received 0 Likes on 0 Posts
I'm struggling to see
I don't think PBL understands what we mean by 'raw data'.
as a gratuitous insult. It is simply a statement of opinion based on observation.

As an (expert) 'end user' I (and others) happen to find 'raw data' a major factor in the 'Safety of Aviation'
BOAC is offline  
Old 3rd Feb 2011, 12:18
  #138 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by BOAC
I'm struggling to see [my comment]
as a gratuitous insult. It is simply a statement of opinion based on observation.
Well, I don't believe that. I think you're just trying to needle.

Originally Posted by BOAC
As an (expert) 'end user' I (and others) happen to find 'raw data' a major factor in the 'Safety of Aviation'
Another statement of opinion, I suppose. Let's see if this one is any better. Can you name any accidents in which a crew's inability to fly on "raw data" was a factor? (Note this is a very specific question.)

MB is puzzled about the use of Bayesian techniques in the evaluation of (supposedly-) ultrareliable systems. His response to the comments I am offering is to imagine I am being argumentative. I get enough of that kind of banter from the people I live with, the cats and the ex-ladyfriends. Can we get back to pretending we are professional people with certain sorts of expertise having a technical discussion?

Originally Posted by BOAC
To me (as a pilot) ["raw data"] means that although the data has passed through many ICs and the like it is essentially the 'truth' and not some software programmer's interpretation of what he/she THINKS I should be seeing
Let me say it again, just in case it wasn't clear enough the first time around. According to this explicit definition, very few airline pilots on modern kit see any "raw data".

Now, let me turn to querying the definition of raw data. RAs fall over every so often. They don't appear to use BITE (or, not effectively) and standard fault-tolerance methods don't appear to be used with multiple RAs in certain kit (say, Turkish Airlines's Boeing 737NGs). The generic failure rate is, I guess, somewhere between 10^(-4) and 10^(-5) per op-hour. According to the definition above, a Turkish Airlines RA on approach to AMS a couple of years ago ceased providing true data rather abruptly. So, according to the definition above, the question arises: when you are looking at a "raw-data"-delivering instrument, such as a VSI, ASI, or altimeter, how do you know you are getting "raw data"?

I actually think the definition above is wrong. And I think it can be partially fixed with a little thought. And I think that, when you try to fix it, you will maybe get some initial inkling of the problems associated with reliable data-paths. All that is then needed to make my general point about modern kit is to interpose a couple of computers.

PBL

Last edited by PBL; 3rd Feb 2011 at 12:42.
PBL is offline  
Old 3rd Feb 2011, 12:43
  #139 (permalink)  
Per Ardua ad Astraeus
Thread Starter
 
Join Date: Mar 2000
Location: UK
Posts: 18,579
Likes: 0
Received 0 Likes on 0 Posts
I think you're just trying to needle.
- you are wrong - simple statement of fact. I guess that is a foreign concept to you.
Can we get back to pretending we are professional people with certain sorts of expertise having a technical discussion?
- if that was for me,
I regret not. I find it too tiring. I think you cannot contribute anything to my knowledge, understanding of aviation or enjoyment of life. I now have to place you on my ignore list but wish you a happy and glorious career with your theoretical work, lady-friends and cats.
BOAC is offline  
Old 3rd Feb 2011, 12:55
  #140 (permalink)  
PBL
 
Join Date: Sep 2000
Location: Bielefeld, Germany
Posts: 955
Likes: 0
Received 0 Likes on 0 Posts
Originally Posted by BOAC
I think you cannot contribute anything to my knowledge, understanding of aviation or enjoyment of life.
Quite obviously not. But you can't fault me for not trying.

Originally Posted by BOAC
I now have to place you on my ignore list
I don't know whether to be mortified or relieved!

PBL
PBL is offline  


Contact Us - Archive - Advertising - Cookie Policy - Privacy Statement - Terms of Service

Copyright © 2024 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.