Automation dependency stripped of political correctness.
22nd Jan 2016, 08:39  #161
Originally Posted by Capn Bloggs
Because nobody gives two hoots about extracting a few hundred more kilos out of a takeoff...

or maybe it is technically very difficult and/or the regulators are unable to work out how it will be reliable/safe enough and/or the costs involved outweigh the benefits?
Bloggs, you misunderstand what I'm referring to.

I am talking about fitting a simple, almost stand-alone piece of kit that does the F = ma sum and tells you if you have entered an incorrect aircraft mass.

Easy to do, and it would have prevented many accidents and many, many near misses.
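For flavour, a minimal sketch of what such a box might compute. The thrust figure, tolerance and the neglect of drag, runway slope and rolling friction are all simplifying assumptions of mine, not a real monitor design:

```python
# Toy takeoff-performance cross-check: does the acceleration we actually
# measure match the acceleration the entered mass predicts via F = m*a?

def mass_plausible(entered_mass_kg: float,
                   net_thrust_n: float,
                   measured_accel_ms2: float,
                   tolerance: float = 0.10) -> bool:
    """True if the entered mass is consistent with the measured acceleration."""
    implied_mass_kg = net_thrust_n / measured_accel_ms2  # rearranged F = m*a
    error = abs(implied_mass_kg - entered_mass_kg) / entered_mass_kg
    return error <= tolerance

# Crew entered 70 t, but with 240 kN of thrust the jet accelerates like 90 t:
if not mass_plausible(70_000, 240_000, 240_000 / 90_000):
    print("TAKEOFF PERF: entered mass inconsistent with measured acceleration")
```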


Originally Posted by Capn Bloggs

Yep, and guess what, the boffins/RS test pilots have been proved wrong. The first skill you need is to be able to fly. Can't fly? You'll die. Worry about the pressure of a gearbox later.

I have also obviously not explained myself clearly with reference to Airbus dumbing down training.

I was specifically referring to tech training, not piloting skills. I think everybody, including the manufacturers, has accepted the shortfalls in pilot training.
Tourist is offline  
22nd Jan 2016, 08:59  #162
Originally Posted by riff_raff
If you think about it, even a fully autonomous aircraft still relies on a control system that functions based on a set of instructions created by a group of humans that designed the controls using their own best judgement on how to deal with any particular situation they thought the aircraft might encounter.
This is not strictly true with a neural net, and even where that comment may be valid, remember that the humans making that plan have the benefit of time and a testing process. That is exactly why we have ECAM and the QRH in the first place. They may not be perfect, but they are a darn sight better than working on the fly.

Originally Posted by riff_raff
This approach works very well in most cases, but there are the rare situations where a skilled pilot with the ability to make split-second decisions can do a better job.
Humans are not good at split-second decisions.
That is what computers are good at.
Humans are good at events that are entirely new and unforeseen. Black Swan events.
That is what computers are currently bad at.

How many accidents do you think are Black Swan and how many are the same old accident again and again....

Sully's adventure is usually quoted about now.
His example is actually exactly what an autonomous aircraft would be good at.
An aircraft will obviously know exactly where it is (GPS/INS) and its energy state. Those, combined with a simple performance data set, will give it its glide range.
It now knows if it can make the nearest airfield.
Not guesses, not wonders.
It knows.
If it can, it goes to the field. (I have seen rumours that it may have been able to. I don't know)
If not, then it makes the same decisions that a human would.
Is there a river/lake in the database?
It will do all that near-instantaneously, whilst putting out a mayday stating exactly where it is going to land, completing the ditching drills, briefing the cabin crew, telling the passengers to brace, etc.

None of that is technologically too challenging.
There is even a system flying today on an optionally manned Black Hawk which, if the river were not there, would scan the terrain for the least-bad forced-landing area.
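As a sketch of the "it knows" step: still-air glide range from height and an assumed lift/drag ratio, checked against a small airfield database. The glide ratio, coordinates and the flat list of fields are my own illustrative assumptions; a real system would account for wind, configuration and energy lost manoeuvring:

```python
import math

GLIDE_RATIO = 17.0  # assumed clean-configuration L/D for a transport jet

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reachable_fields(lat, lon, height_m, airfields):
    """Airfields within still-air glide range, nearest first. Not guesses: it knows."""
    glide_range_km = height_m * GLIDE_RATIO / 1000.0
    options = [(distance_km(lat, lon, alat, alon), name)
               for name, alat, alon in airfields]
    return sorted(o for o in options if o[0] <= glide_range_km)

# Roughly the US1549 geometry: ~850 m over the Hudson, LGA and TEB nearby.
fields = [("LGA", 40.777, -73.872), ("TEB", 40.850, -74.061)]
print(reachable_fields(40.85, -73.95, 850.0, fields))
```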


Originally Posted by riff_raff
When it comes to ensuring the safety of commercial airline passengers, the added cost of having two qualified pilots in the cockpit is money well spent.
Which is it?
Is an autonomous system too expensive to produce, as some would have it, or an attempt to save money by binning pilots?



I think it is not unreasonable to make this statement.


Every piece of automation brought into airline cockpits so far has led to a reduction in reliance on pilots, and an improvement in safety.

What makes you think that further steps towards the logical conclusion will be any different?
Tourist is offline  
22nd Jan 2016, 09:04  #163
Originally Posted by Goldenrivett
Hi Tourist,
I think even Airbus recognises that ECAM doesn't provide all the answers.
Airbus is not an autonomous aircraft.
It is designed to have a human in the loop.
That does not mean Airbus could not make an aircraft that was.

Referring to Airbus aircraft as proof that we will always need pilots is fallacious since they were designed to need pilots.

Originally Posted by Goldenrivett
It's a pity crews like yourself feel you can't use your technical knowledge and experience to anticipate what ECAM will tell you to do later. e.g. Cabin ALT shows rate of climb & outflow valve fully closed = Suspected door seal leak. You can't prevent cabin ALT from climbing - but no ECAM yet.

Do you sit there and wait for ECAM "EXCESS CAB ALT" to tell you what to do or do you anticipate the problem and initiate a precautionary descent?
No.
I am very happy to use my tech knowledge.
As I said above, Airbus is not designed to do everything itself. It is 60s/70s tech. So it doesn't.
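Incidentally, that door-seal scenario is exactly the sort of anticipation a newer design could encode as a monitor rule rather than waiting for the human. A minimal sketch; the thresholds are invented, and a real pressurisation monitor would be far richer:

```python
def door_seal_leak_suspected(cabin_vs_fpm: float, outflow_valve_pct_open: float) -> bool:
    """Cabin altitude climbing while the outflow valve is already shut means
    air is leaving somewhere it shouldn't: flag it before 'EXCESS CAB ALT'."""
    return cabin_vs_fpm > 300.0 and outflow_valve_pct_open < 1.0

if door_seal_leak_suspected(cabin_vs_fpm=800.0, outflow_valve_pct_open=0.0):
    print("Suspected door seal leak - consider precautionary descent")
```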

Last edited by Tourist; 22nd Jan 2016 at 09:15.
Tourist is offline  
22nd Jan 2016, 10:30  #164
Hi Tourist,

You must be really confused. You said in post #158
"By contrast, the tech knowledge required for the Airbus CBT is more like "the ladybird book of planes"

The fact that Airbus manage to be incredibly safe under these circumstances makes me think that it is deliberate policy, and not a degradation of standards, that has led to this.

The manufacturers believe that knowledge will lead to thinking instead of performing like automatons in the event of abnormal events. (They may well be correct that modern aircraft are just too complex for us to understand well enough to make good decisions under pressure.)
They would rather you just did what the ECAM says, in effect performing as the avatar for the computer.

Under these circumstances humans are really just error vectors.
Unable to help situations, but able to hinder.

I say remove the vector."

Then you say,
"Referring to Airbus aircraft as proof that we will always need pilots is fallacious since they were designed to need pilots."

So are they designed to need pilots?
Do you want to remove the "vector"?
Make up your mind.
Goldenrivett is offline  
22nd Jan 2016, 10:36  #165
Goldenrivett

I think/hope you may be the only person who is under the impression that I'm advocating removing humans from current aircraft...
Tourist is offline  
22nd Jan 2016, 11:16  #166
Re the current discussion, my attention was drawn to the NASA report ’The Analysis of the Contribution of Human Factors to the In-flight Loss of Control Accidents’ *.
An initial reading heightened my inherent bias: the word ‘error’ (what is it: a cause, an action, or a consequence?), the use of HFACS (categorising, boxed-in thinking, the risk of reducing human variability to numbers); and then there were Bayesian Belief Networks … !!

However, having struggled to the end, I found acknowledgment of the assumptions and restrictions in the database, and of the limitations of the research, in that this was only a model, or a process of modelling.
An alleviating concluding statement triggered a re-reading of the report: “The analysis of the historical data showed that the deficiencies at the airlines’ organizational levels are often the underlying cause of flight and maintenance crew related accidents. Consequently, the authors developed a high-level airline organizational hierarchy to trace and identify the deficiency propagation”.

Back to Fig 1, which shows the error paths (vectors?) and combinations of contributors, noting that one is a direct vector bypassing the human. Note also that the data relates to the number of accidents, not the previously discussed number of fatalities.
Idly running some numbers, it is interesting that the percentages in the HE-LOC path total 77.8%, which sits right next to the oft-misquoted 80% human contribution in all accidents. Is this a model of the real world, or just a model of our perception of the real world?
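To make that idle arithmetic concrete, a toy rendering of the Fig 1 bookkeeping. Only the 77.8% total and the 5.5% ground-personnel share come from the discussion above; the split across the other paths is invented purely so the sum works:

```python
# Paths are tuples of contributors ending in loss of control (LOC).
paths = {
    ("HE", "LOC"): 72.3,           # flight-crew error paths (invented split)
    ("GND", "HE", "LOC"): 5.5,     # ground personnel acting through HE (quoted)
    ("SC", "LOC"): 14.2,           # direct system/component path (invented)
    ("ENV", "LOC"): 8.0,           # direct environment path (invented)
}

human_involved = sum(pct for path, pct in paths.items() if "HE" in path)
print(f"HE-involved share of accidents: {human_involved:.1f}%")  # -> 77.8%
```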

Considering an ‘automaton’s’ view, the “80%” HE would have to be blocked, yet 5.5% of that originated from ground personnel, not forgetting the previous system-related direct path (SC-LOC).
A personal experience of LOC (amongst others, intended and not so) involved the non-existent direct path ENV-LOC; the aircraft was restored to stable flight (LOC-HE-ENV). This represents the reverse, recovery path: the successes of human involvement. In a logic diagram this could involve negative HE (the argument for automation), or alternatively positive human behaviour (a negative vector); cf. the concept that error and success have the same cognitive root.
Automation would have to consider all of the reverse paths in Fig 1, yet there is little or no data which identifies the mechanism of the potentially large number of ‘hidden’ (unreported) successes.

Without identifying this mechanism, how might we be sure that automation can replace the human?
Also, because this line of argument is based on a model (computation/automation), it is unlikely that we can ever provide any assurance of success; yet the pro-automation argument is based on the same kind of model, and any implemented solution will also use similar computation/automation technology.

Pulling hard on boot laces?

* The direct link may not work: try NASA Technical Reports Server (NTRS) - The Analysis of the Contribution of Human Factors to the In-Flight Loss of Control Accidents


A retired, sceptical and biased pilot who spent 25 yrs developing and testing automation, then 10 yrs investigating accidents to understand weaknesses in the auto-human interface, with 40 yrs of accumulated personal ‘error’ in aviation.
alf5071h is offline  
22nd Jan 2016, 11:57  #167 (Thread Starter)
I think everybody, including the manufacturers, has accepted the shortfalls in pilot training.
But they have avoided the issue. From where I stand, effective action is negligible. Lip service only. Like adding one more raw-data ILS per year to a sim session, plus maybe a manually flown ILS in good weather, and no crosswind of course. Fat lot of good that does to address the automation dependency problem.
Centaurus is offline  
22nd Jan 2016, 12:56  #168
Yes, I totally agree.

Unfortunately, bringing current pilots back up to the skill level of 30 yrs ago simply cannot be done.

Even if the initial training standard could be replicated, the thing that kept everybody competent in the old days was the constant need to actually do the job.

Now that the only time we do the whole job is during the incredibly rare malfunctions, we cannot hope to remain at our best.
Tourist is offline  
22nd Jan 2016, 13:07  #169
Even if the initial training standard could be replicated, the thing that kept everybody competent in the old days was the constant need to actually do the job.

Now that the only time we do the whole job is during the incredibly rare malfunctions,


I think much of the discussion about competence is more about being able to do the basics, i.e. normal ops, not just those under non-normal (malfunction) circumstances. This concern is motivated by incompetent pilots crunching serviceable a/c while trying to achieve the normal/basic.
RAT 5 is offline  
22nd Jan 2016, 13:39  #170
Originally Posted by RAT 5

I think much of the discussion about competence is more about being able to do the basics; i.e. normal ops, not just those under non-normal (malfunction) circumstances. This concern is motivated by incompetent pilots crunching serviceable a/c. while trying to achieve the normal/basic.
OK, I must admit that I generally had no concerns about pilots' ability to fly the basic script in my airline with all the toys working.

It was in the Sim with the toys removed where I would see astonishing/horrifying things.

In that list of horrors I would include the degradation of my own basic skills/capacity despite my efforts to hand fly as much as possible on the line.
Tourist is offline  
22nd Jan 2016, 22:37  #171
What is a fact is that Boeing FCOMs in the last 25 years, ever since EFIS/FMC a/c were introduced, contain less and less technical information than previous-generation a/c.
the notion that the aircraft commander IS the legal commander responsible for the safety of the flight, and in the end is the sole decision-maker on board the aircraft is gradually being made subservient to the audit process where such authority is "modified"
To expand a bit on my earlier post, these two observations actually form the important pattern.

Soon after WWII, Herbert Simon, who later was an early developer of artificial intelligence, said the following, one of my favorite quotes:
“Two persons, given the same skills, the same objectives and values, the same knowledge and information, can rationally decide only upon the same course of action. Hence, administrative theory must be interested in the factors that will determine with what skills, values, and knowledge the organization member undertakes his work.”
This idea, the suggestion that the human mind is simply a binary computer, is at the root of observations like

The manufacturers believe that knowledge will lead to thinking instead of performing like automatons in the event of abnormal events.
This is not necessarily a safety strategy so much as it is a management style. The only way a human being will fit into a flow chart is if you convince yourself that Simon’s idea works. (To be fair to Simon, he was a remarkable polymath who evolved considerably and made substantial contributions to many areas.) Thus, as a manager, you are primarily concerned with “what skills, values, and knowledge the organization member undertakes his work.” Naturally, it is easier to specify the course of action that you would like your human to rationally decide upon if there are fewer skills, simpler values, and less knowledge.

This exists in areas much broader than aircraft systems knowledge. We have new hire first officers straight out of the military who have no idea what an Operations Specification is. How would they know? The company doesn’t issue them to the pilots. They didn’t have OpSpecs in the military, and no ATP course here in the States would ever go so far as to explain how OpSpecs work and why they matter. The company certainly doesn’t want to spend any time on this; the instructors themselves don’t have the OpSpecs. The same ex-military pilot may not know that there is an FAR that says you can’t take off if you know you will arrive at the destination over the max landing weight…nobody ever provided any training in what the FARs say. (Not an indictment of ex-military pilots; many, particularly the former KC-135 guys, hand fly quite well for some reason…)

Reading Langewiesche’s piece on AF447, the most poignant passages are the CVR transcriptions detailing the frustration and near panic felt by the subordinate pilots when the captain did not immediately respond to their calls. Assuming that the translation is accurate (I always worry that meaning is lost in these endeavors), it is clear that neither pilot believed he had the technical skill necessary to resolve the situation. Moreover, they somehow believed that the captain, by virtue of his greater experience, would have that technical knowledge.

Thus, in the stress of the moment, they revealed an inner perception of themselves as inadequate by virtue of inexperience. The “skills, values and knowledge” that they possessed were not up to the requisite course of action. They knew it, and they were clawing their fingernails raw trying to get at that knowledge. How on earth did they graduate from a technical program that awarded them a type rating without the confidence they needed at that moment?

The problem, as I have said in an earlier post, is that all of the management theory generated by Taylor, Simon and others operates within a linear mathematical paradigm. It uses the same understanding of cause-and-effect that babies use when predicting the motions of billiard balls. They probably could not have done otherwise; chaos theory and the understanding of nonlinear behaviors did not exist until the 1970s for all practical purposes. However, in the cockpit, we actually operate in an environment that frequently exhibits nonlinear behavior. We always have. Turbulent flow off a wing is nonlinear. Weather is nonlinear. Even the function of neurons in the brain is nonlinear. Complex systems, which we already had in aviation and then expanded exponentially with automation, exhibit emergent behavior at the least, in which the system output is something more or less than the sum or product of its components.

Standard operating procedures, when well designed, function to protect margins of safety. The margins exist to provide resilience and can absorb nonlinear effects. There are two ways to comply with SOP. In one, you simply do what you are told. In the other, you understand the margins that you are protecting, understand how that SOP accomplishes that, and you comply intelligently, as an act of executing your authority as well as an act of mastery over the aircraft. In the former approach, you become fearful of noncompliance, and pull the nose up as soon as the nose drops regardless of why. In the latter approach to SOP, you preserve the protections and error traps built into SOP while being much less likely to follow the book into the ground.

Linear management theories don’t see the difference.

Nonlinear behaviors are why I am not terribly worried about autonomous airliners in my lifetime. Eventually, sure…but not for quite a while. But much, much more important is how we tailor our profession to meet our obligation as the final authority as to the operation of the aircraft, in an increasingly complex system managed by people who actually think that “Two persons, given the same skills, the same objectives and values, the same knowledge and information, can rationally decide only upon the same course of action…”
Mansfield is offline  
23rd Jan 2016, 01:51  #172
Originally Posted by tourist
Referring to Airbus aircraft as proof that we will always need pilots is fallacious since they were designed to need pilots.
They were designed to protect against pilots' mistakes ... but by now pilots have procedures to protect against the "protections" that were supposed to save them ...
CONF iture is offline  
23rd Jan 2016, 04:55  #173
To be fair, the protections built into Airbus to protect against pilot error do work quite well when serviceable.

Which protections are you referring to that we require protecting from?
Tourist is offline  
23rd Jan 2016, 05:35  #174
#126:
I'm concerned about automatic dependence diluting manual flying skills, but I'm more concerned about automatic dependency causing a dilution of airmanship
That is very well put! It needs both, but while the manual flying skills do not need to be perfect (just don't leave your flight envelope, and if you approach the edges of the envelope, steer it back into the middle), sound judgement and decision making with good situational awareness are overall the most important.

If we think about AF447 or the Air Asia event, it was a problem of basic flying, but not in the sense of "how precisely can I fly it"; rather, "how should my plane be flown ROUGHLY right now". You could even argue it was not about what to do, but about what surely NOT to do (pull on the stick).
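That "ROUGHLY right" idea can be put crudely in code: a guard that vetoes nose-up demand near the stall, regardless of how imprecise everything else is. This is not any real control law; the AoA limit, margin and rates are invented:

```python
STALL_AOA_DEG = 12.0  # invented limit, purely for illustration

def rough_pitch_guard(aoa_deg: float, pitch_demand_deg_s: float) -> float:
    """Pass the demand through unless we are near the edge of the envelope,
    in which case steer back toward the middle: whatever else, do NOT pull."""
    margin_deg = STALL_AOA_DEG - aoa_deg
    if margin_deg < 2.0 and pitch_demand_deg_s > 0.0:
        return -1.0  # gentle nose-down instead of the commanded pull
    return pitch_demand_deg_s

print(rough_pitch_guard(aoa_deg=11.0, pitch_demand_deg_s=+3.0))  # -> -1.0
print(rough_pitch_guard(aoa_deg=4.0, pitch_demand_deg_s=+3.0))   # -> 3.0
```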

This area of reasoning ties in very well with #117:
This has led me to suspect that we have a strong “compliance” reflex. The first response is aimed at returning to compliance.
Being able to decide properly when compliance is no longer the main goal is one of the craftsman qualities a pilot should possess. We do not want too much avoidance of necessary compliance either; compliance is generally a very strong contributor to safety. But when does the focus on compliance become a burden in a specific situation?

The question is then: how can we train, build and maintain sound judgement, decision-making skills, situational awareness and airmanship? I think it is a question of culture and training. You need a culture where captains show these qualities to their first officers, so that over time a common understanding develops of what is appropriate and what is not. You need SOPs that support that kind of thinking and do not hinder it. You need regular training of crews, in the simulator and in publications where such decisions are discussed. You need safety publications to sharpen the organisation's mutual understanding of what good airmanship is.

And besides that, you still need the basic flying skills with pitch and power, and a good instrument scan, which come from having a sensible culture and set of SOPs for switching off the automatics.

I would disagree that it is problematic that the FCOMs have less technical information than they used to. Systems have become more complex, and as a pilot I do not need to understand them at the deepest technical level. However, what I do need is an understanding of what the systems are intended to do and how they are supposed to interact with each other and the environment, to make best use of them. And most importantly, I need to be able to recognise a failed system and to judge what I have to do now with the aircraft (what to do with the system comes later).
1201alarm is offline  
23rd Jan 2016, 05:47  #175
#168
Automation would have to consider all of the reverse paths in Fig 1, yet there is little or no data which identifies the mechanism of the potentially large number of ‘hidden’ (unreported) successes.
Interesting post altogether. Rephrased in non-scientific terms, it means that we have no statistics about how often the humans actually saved the day.

This applies not only to abnormal situations, but to all the daily decisions pilots make with regard to passenger and weather irregularities, diversion decisions, etc. Every experienced commercial pilot will know what I mean. There are everyday little situations and surprises.

Which brings me to #173:
Nonlinear behaviors are why I am not terribly worried about autonomous airliners in my lifetime.
Personally, I believe a fully autonomous airliner would be far too non-linear, because every level of interaction between today's independent systems would lead to enormous complexity and non-linear behaviour. It won't be able to deliver the one-in-ten-million level.
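A back-of-envelope illustration of why coupling kills the arithmetic: with truly independent redundant channels, failure probabilities multiply and a 10^-7-class target looks easy; give the channels even a small common-mode (shared-cause) fraction, in the standard beta-factor style, and the coupled term dominates. The channel rate and beta below are invented:

```python
p_channel = 1e-4   # assumed per-flight failure probability of one channel
beta = 0.01        # assumed fraction of failures striking both channels at once

p_independent = p_channel ** 2       # 1e-8: comfortably beyond the target
p_with_coupling = beta * p_channel   # 1e-6: ten times worse than 1e-7
print(f"independent: {p_independent:.0e}, coupled: {p_with_coupling:.0e}")
```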
1201alarm is offline  
23rd Jan 2016, 13:37  #176
Originally Posted by Tourist
To be fair, the protections built into Airbus to protect against pilot error do work quite well when serviceable.
They were serviceable ... but data pollution along the chain made them trigger when inappropriate. Pilots are now requested to kill them before they kill ...
AoA protection is a regular contender.
CONF iture is offline  
23rd Jan 2016, 13:58  #177
Originally Posted by CONF iture
They were serviceable ... but data pollution along the chain made them trigger when inappropriate. Pilots are now requested to kill them before they kill ...
AoA protection is a regular contender.
You've lost me on that one. Something new?

Please explain further.
Tourist is offline  
24th Jan 2016, 03:11  #178
The new thing is that we finally got an Operations Engineering Bulletin on how to force the airplane out of Normal Law when protections unduly activate.
Up to that time the airplane was there to save the pilots; to acknowledge that some souls on board could save the airplane from its own system was simply not part of the Airbus line of thought ...

A first known occurrence with Qantas in 2008, then EVA Air in 2012, and, was it last year, with Lufthansa ...
Switch 2 ADRs OFF to regain control.
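A toy illustration of the failure mode, not Airbus's actual ADR logic (all values invented): a median vote across three air-data sources rejects one wild value, but two correlated polluted values outvote the single good sensor, and a "protection" keyed to the voted value can then fire. The OEB workaround then amounts to switching two sources off and trusting the survivor:

```python
import statistics

def voted_aoa(adr_values):
    """Median vote across the available air-data sources."""
    return statistics.median(adr_values)

print(voted_aoa([4.2, 4.1, 50.0]))   # one spike is outvoted -> 4.2
print(voted_aoa([50.0, 4.1, 50.0]))  # two polluted sources win -> 50.0
# With the voted AoA at 50 deg, an AoA protection would command nose-down
# in perfectly normal flight: hence "switch 2 ADRs OFF to regain control".
```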
CONF iture is offline  
24th Jan 2016, 11:25  #179
1202, “… that we have no statistics about how often the humans actually saved the day.”
Yes, but more than that, we have little understanding of the mechanisms behind any statistics.

In Fig 1, the negative HE-LOC vector comes with percentages, but in order to improve safety we need to understand how these adverse outcomes came about.
One simplified view considers incorrect situation assessment or incorrect choice of action (Orasanu); turning this around, the successes might represent appropriate assessment or choice of action. Alternative views consider that the behaviour in adverse events and in successes has the same basis, and thus the successes should be fully investigated (Hollnagel).

In some successes the initial assessment/action was not as required: e.g. of the 20+ ICI/ADC events pre-AF447, several aircraft pitched up, but subsequent action (adjustment) prevented a stall. Another view involves a continuous process of adjustment: reviewing awareness and action based on an interim outcome.
A further aspect of success involves the really hidden events, e.g. where an unmodified aircraft faced the ICI situational threat, but the crew managed the situation (adjusted behaviour) to avoid an unwanted outcome; normal operation, a non-event (Weick et al: ‘safety is a dynamic non-event’).
Successful crews/operators appear to be able to manage ‘potential accident scenarios’; not just avoiding the fatal accidents, but all events which could have adverse outcomes. Yet the solution need not be ‘machine’ (or SOP) perfect, only acceptable for the situation (machine ~ technology/automation).
This may involve (situation) recognition-primed / naturalistic decision making, which may not be completely feasible with machine-based decision making. A machine might provide better situation assessment, but not the choice of action, which may depend on learning. This assumes that machine learning is based on previous situations, whereas human learning enables previously experienced situations to be extended to other, un-sampled situations (intuition?); thus, for machines to have sufficiently reliable ‘intuition’, the boundaries of this process might have to be programmed by the fallible human.

As for safety improvements, Orasanu considers machine-aided awareness, but also complementary crew training to improve experience and judgement: airmanship.
For the choice of action, a machine may help in judging risk, but such a judgement would require an understanding of both the situation and the proposed action (what has the human decided to do?), i.e. machines may be better at catching an ‘error’ than at making the decision, e.g. EGPWS.

Orasanu. http://www.dcs.gla.ac.uk/~johnson/pa...ithlynne-p.pdf

Hollnagel. https://www.scribd.com/doc/296474809...hat-Goes-Wrong

Weick. Managing the Unexpected - University of Michigan Business School

Klein. http://xstar.ihmc.us/research/projec...ensemaking.pdf
And http://xstar.ihmc.us/research/projec...semaking.2.pdf
And http://psych.colorado.edu/~vanboven/..._expertise.pdf

Other refs: http://high-reliability.org/Critical...Johns_2010.pdf

Error management in aviation training
alf5071h is offline  
24th Jan 2016, 14:23  #180
alf

Could you do me a favour and dumb down your posts slightly?

Speaking for myself, I find them very interesting but sometimes a little opaque, as if there is required pre-reading to understand some of the points.
Tourist is offline  

