PPRuNe Forums

PPRuNe Forums (https://www.pprune.org/)
-   Rumours & News (https://www.pprune.org/rumours-news-13/)
-   -   Airbus pitches pilotless jets -- at Le Bourget (https://www.pprune.org/rumours-news/622618-airbus-pitches-pilotless-jets-le-bourget.html)

Gove N.T. 19th Jun 2019 07:39


Originally Posted by CargoOne (Post 10496924)


you must be the one who is avoiding LGW and FRA inter-terminal trains?

Perhaps include the Docklands Light Railway in London, which carries more than 119 million people a year over a 24-mile network and has an on-time departure reliability of 99% with no driver, just a computer. Yes, it has a fixed trajectory, which reduces the chance of straying. This is the future of all rail travel. Yes, I know it’s a flying forum, but planes now won’t work safely without computers - properly programmed, of course.

LOONRAT 19th Jun 2019 07:42

THE FUTURE? I HOPE NOT
 
'Welcome to the first computer-flown transatlantic crossing. Let me assure you that although we have no pilots aboard, our triple computer system and advanced programming guarantees you will have a safe and pleasant flight to New York. Be assured that nothing can go Wrong Go Wrong Go Wrong Go Wrong Go Wrong >>>>>>>>>>'

fergusd 19th Jun 2019 07:47


Originally Posted by Herod (Post 10496755)
Errr...on what do you base that? A pilot is thinking and acting (or at least monitoring) in 4 dimensions (the conventional 3 plus time). A car driver is only in 2. In fact, if you accept that a car is following a road, he is only in 1; straight ahead.

Your view is highly oversimplified. Next time you get in a car, lock the steering straight ahead and accelerate to 70 mph with your eyes shut, and see how far you get...

Aircraft operate in a highly controlled environment (cars do not). The control system required to fly a plane does not need the complex decision-making logic that demands non-deterministic deep learning (cars do - today there is no other approach that allows a car to be driven on current roads), and its flight behaviour is mathematically deterministic (a car's is not - the deep-learning part is non-deterministic and 'difficult' from a safety perspective). From a software and safety perspective, flying an aircraft is a much simpler problem than driving a car (assuming the car is being driven in any kind of realistic scenario).

It is a very commonly held view (in the safety, systems and software communities) that full automation of aircraft and trains is far more practical than that of cars.

Airbus, unsurprisingly, concurs.
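fergusd's distinction can be made concrete. A conventional flight control law is a deterministic function of its inputs: the toy altitude-hold loop below (a PID controller with invented gains and a one-line stand-in for aircraft dynamics, nothing like certified avionics) always produces the same trajectory from the same inputs, which is what makes exhaustive analysis possible.

```python
# Toy deterministic altitude-hold loop: a PID controller plus a trivial
# "plant" model. All gains and numbers are invented; this only illustrates
# that a conventional control law is fully deterministic.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.01, kd=0.1)
altitude, target = 9500.0, 10000.0
for _ in range(100):                      # 100 one-second steps
    correction = pid.update(target - altitude, dt=1.0)
    altitude += correction                # toy plant: command moves altitude directly
```

Run it twice and every intermediate value is identical. A deep-learning driving policy offers no such closed-form guarantee, which is the safety-case gap fergusd describes.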

parabellum 19th Jun 2019 07:55


Of course absolute perfection will probably never be achieved, but pilotless aircraft will happen, and I'd wager sooner than most expect.
It won't happen until the world's insurance markets agree to let it happen, when they feel it is an acceptable risk, especially the third party element, (pilotless aircraft involved in a mid-air collision over the CBD of a major city). The leaders in the aviation insurance market notch up a lot of air miles and will have definite opinions on this subject.
Before the skies become full of pilotless aircraft dependent on ground control, all threat from sophisticated terrorism will have to have been eliminated.
Don't hold your breath.

KiloB 19th Jun 2019 08:01

I wish people would stop using ground-based examples of automated control to justify systems for aircraft.
There is one BIG difference. Just about any ground-based control system faced with data ‘outside parameters’ will contain code to ‘halt and call for assistance’. Difficult to do when you are flying!
And for all those second-guessing Sully: remember, a big part of the decision-making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do computers/programmers deal with that?

George Glass 19th Jun 2019 08:14

Yeah right. People’s enthusiasm for this fairyland stuff is in inverse proportion to their operational experience. Physically handling the aircraft is about 5% of what an airline pilot does. I wonder if an enthusiastic PhD student has ever followed a domestic crew on, say, a 4-sector-per-day, 4-day trip and added up the critical decisions made from sign-on to sign-off. It would be an interesting exercise. Just boarding, getting the doors closed, pushing back and taxiing to the hold point would be beyond automation. Ain’t gunna happen.

DaveReidUK 19th Jun 2019 08:18


Originally Posted by KiloB (Post 10497403)
And for all those second guessing Sully; remember a big part of the decision making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do Computers/ Programmers deal with that?

Quite so.

It's not hard to envisage that a computer flying US1549 might well have decided that diving into the Hudson was the least worst option (confining any deaths or injuries to those on board), compared to the carnage that would likely ensue if the flight came down in the middle of a New York borough.

I'd rather have the guy(s) up front making that decision than some anonymous programmer who's having a bad day at his/her keyboard.

CargoOne 19th Jun 2019 08:24


Originally Posted by KiloB (Post 10497403)

And for all those second guessing Sully; remember a big part of the decision making was that ‘even with a 90% chance of a successful turn back, there was a 10% chance of a dive into a heavily populated area’. How do Computers/ Programmers deal with that?

This is very easy, and you don’t need any AI for that. Processing power equivalent to an iPhone, with pre-programmed aircraft performance and other relevant data inputs, will make a decision in a split second with no element of guessing, as it will analyse a few thousand possible scenarios and choose the best one. This is where computers are light years ahead of people.
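As a toy illustration of the "enumerate the options and pick the best" idea (not anything CargoOne specified: the glide ratio, the option list and the risk weights below are all invented), a brute-force scorer is only a few lines:

```python
# Illustrative only: score every candidate landing option and pick the best.
# All figures, field names and weights are invented for the example.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    distance_nm: float      # distance to the landing point
    ground_risk: float      # 0.0 (nothing below) .. 1.0 (dense population below)

GLIDE_RATIO_NM_PER_1000FT = 2.0  # invented figure for the toy model

def reachable(opt: Option, altitude_ft: float) -> bool:
    return opt.distance_nm <= (altitude_ft / 1000.0) * GLIDE_RATIO_NM_PER_1000FT

def score(opt: Option, altitude_ft: float) -> float:
    if not reachable(opt, altitude_ft):
        return float("-inf")          # infeasible options are never chosen
    margin = (altitude_ft / 1000.0) * GLIDE_RATIO_NM_PER_1000FT - opt.distance_nm
    return margin - 10.0 * opt.ground_risk   # weight ground risk heavily

options = [
    Option("return to departure runway", distance_nm=8.0, ground_risk=0.9),
    Option("nearby GA field", distance_nm=6.0, ground_risk=0.4),
    Option("river ditching", distance_nm=2.0, ground_risk=0.1),
]

best = max(options, key=lambda o: score(o, altitude_ft=3000.0))
print(best.name)   # prints: river ditching
```

The search itself is trivial; the contentious part, as later posts argue, is who chooses numbers like `ground_risk` and the weight attached to them.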

Flocks 19th Jun 2019 08:30

When I see everything I have to solve on the ground as captain with my FO (oh, I'm so glad we are two, to make today's short turnaround), and I'm only talking about ground stuff: pax problems, bag loaders, cabin crew requests... I'm always thinking the flying part is the easy part. So if a fully automated plane comes, they will need to change the whole way the plane is operated, from the gate agent to the ground staff and ATC. And since those planes will have to operate alongside the good old-fashioned two-crew ones, I wonder how they plan to manage that transition.

Oh, of course, they will have a single-crew plane first, but one small problem I see is this: it works when you have a single pilot with past experience. A few years later, how will it work when you have 200 hours of training and you are the captain taking all the decisions (again, I'm talking about the ground)? It took me so long watching others do it before I was able to do it myself.

We have all been promised autonomous cars... still waiting. Even Tesla has admitted it is more difficult than they initially thought.
Airbus and others are telling us how we will fly autonomous flying buses in the city. Again, it is all very well to talk about it and to try to develop the technology, but the public already doesn't accept a low-altitude helicopter over a city, for noise and other reasons. So imagine hundreds of those electric autonomous things flying all over London or New York... I'll take the bet it won't arrive anywhere soon (not on the schedule the marketing people communicate to us!)

Same with the hyperloop project: if I listen to the advertisements, in a few years we will be able to travel amazingly fast. Again, I'll take the bet it won't happen that quickly.

Fly safe.

futurama 19th Jun 2019 08:50


Originally Posted by CurtainTwitcher (Post 10497376)
What exactly are they good at? The leading type of AI, neural nets, are exceptional at taking a closed problem and solving it, e.g. AlphaGo and AlphaGo Zero (AGZ). However, these problems were not solved on the fly; they took extensive computational resources to generate their own training data. In the case of AGZ it was a 40-day process of playing against itself to generate the dataset.

However, neural nets need extensive clean training data, either from the real world or by simulation. There was no extensive corpus of air returns with double engine damage for A320s, unlike the millions of recorded games of Go for AlphaGo's training. Even Sully's effort represents a single instance that is effectively useless for future algorithmic training. Billions of simulations would be necessary just to replicate and solve this exact scenario on this day. For self-driving cars they actually model intersections and run billions of simulations to generate the training data that enables self-driving: Inside Waymo's Secret World for Training Self-Driving Cars.

There are many things that computers can do exceptionally better than humans, but with the current leading AI technology, solving novelty is not one of them.

You're conflating AI with machine learning and with automation.

Furthermore, you're wrong that "AI" requires millions of recorded games to learn from; an extensive corpus is not necessarily required. Read about AlphaGo Zero, which used zero recorded games (hence the name). In just three days AlphaGo Zero played 5 million games against itself, and surpassed the original AlphaGo (the version that beat Lee Sedol), winning against it 100 games to 0. The successor, AlphaZero, is even more impressive.

Now, AlphaGo Zero's technique is probably not suited to autonomous vehicles, but we don't even need machine learning to safely return Sully's plane to La Guardia. For a computer this is actually a much simpler problem, where a constraint-satisfaction system combining a finite state machine with path planning would be sufficient.
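futurama's suggestion can be sketched as a tiny finite state machine whose failure transition is gated by a glide-range constraint. Everything here (state names, distances, the glide ratio) is invented for illustration, and real path planning would operate on full trajectories rather than a single distance:

```python
# A minimal FSM-plus-constraint sketch, not futurama's actual proposal.
# Names and numbers are invented for illustration.

def pick_target(altitude_ft, candidates, glide_nm_per_1000ft=2.0):
    """Constraint-satisfaction step: keep only targets inside glide range,
    then prefer the closest one."""
    range_nm = (altitude_ft / 1000.0) * glide_nm_per_1000ft
    feasible = [(d, name) for name, d in candidates.items() if d <= range_nm]
    return min(feasible)[1] if feasible else "forced landing ahead"

def step(state, altitude_ft, engines_ok, candidates):
    """One FSM transition; states are plain strings for the sketch."""
    if state == "CRUISE" and not engines_ok:
        return "ENGINES_OUT"
    if state == "ENGINES_OUT":
        return "GLIDE_TO:" + pick_target(altitude_ft, candidates)
    return state

candidates = {"LGA runway 13": 9.0, "Hudson River": 2.5}
s = step("CRUISE", 2800, engines_ok=False, candidates=candidates)
s = step(s, 2800, engines_ok=False, candidates=candidates)
print(s)   # prints: GLIDE_TO:Hudson River
```

At 2,800 ft the toy glide range is 5.6 nm, so the 9 nm runway option is rejected by the constraint and the river is chosen; the machine never "decides" anything its transition table does not already encode.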

CurtainTwitcher 19th Jun 2019 09:20


You're conflating AI with machine learning and with automation
Yes, I concede this point. However, an aircraft operates in a total system where some "smarts", either human or AI, command a lower-level system to implement the automation that manipulates the aircraft in space. The automation bit is relatively easy, and well-established technology.

What most people here are talking about is the smarts; that is the challenge most are alluding to. In the case of Sully, I also concede that the automation required to implement the turn and configure for a return is relatively straightforward. What I am questioning is how to implement the AI command decision to make the turn, or take another choice, or handle any one of the thousands of possible scenarios that arise across the planet every day for takeoffs and landings with a range of mechanical malfunctions.

How do you write code for each specific scenario? Is every takeoff in a twin jet going to be running the double-engine-failure-and-return calculation throughout the takeoff and climb, until a return is no longer possible? How about the CX scenario above, with one thrust lever at idle and one stuck at 75% for approach? Is the computer going to be running every known scenario constantly? There are a lot of predictable scenarios that haven't yet been seen, and even more that we can't imagine but that will happen given sufficient time.

I am specifically talking about the implementation of the command aspects of the flying problem, and I thank you for forcing clarification of this point.

As to AlphaGo vs AlphaGo Zero: yes, I was well aware of the differences between the two, which raise more subtle points. Humans cannot follow AGZ's gameplay in some cases, and see unprecedented, crazy moves which are completely foreign. The issue this raises is reproducibility: how does an algorithm arrive at its decision? It isn't always explainable, or in some cases reproducible, which is going to be a legal issue when the inevitable accidents occur. Any highly complex system that meets the real world, human or AI, is going to have accidents eventually.

Rated De 19th Jun 2019 09:55


I'd rather have the guy(s) up front making that decision than some anonymous programmer who's having a bad day at his/her keyboard.
Given pilots are strapped to the same machine, it always pays to bet on that old horse called self-interest!

Global Aviator 19th Jun 2019 10:11

Full automation and aviation. What about the phenomena that we as pilots see out the window? The CB that is not painting on the radar, the one that is but isn’t. That wind on short final that seems to come from nowhere. So many variables that humans can and do deal with.

Feeling what is obviously wake turbulence and using initiative to avoid further, even though separation exists.

I don’t doubt it will happen, but I feel it will be a loooong way away until it’s the norm.

S speed 19th Jun 2019 11:15

Truly autonomous aircraft will require real artificial general intelligence, and that is decades away at best.

The flip side of that coin is that you will then be placing hundreds of human lives in the hands of something you cannot control.

Hopefully I'm pushing up poppies by the time this era comes to be.

Water pilot 19th Jun 2019 14:00

So is the computer going to threaten to land the plane at the nearest airport if a bunch of drunken yahoos onboard start pissing on the floor and molesting the female passengers? Will 400 terrified humans who have just dropped 1,000 ft be mollified by a computer voice saying "BE CALM. ALL IS WELL"? When the plane catches fire, do we just let the pax decide which doors to open? How long would you stay on an automated plane that was apparently stuck on the runway for reasons you don't understand? I spent four hours on one (hot) plane in Denver waiting for thunderstorms to clear; it would have been pretty damn ugly if there hadn't been some authority both to coerce us into staying onboard and to assure us (falsely, it turned out) that we were leaving soon. Imagine some rumour while flying across the Atlantic that the plane is going the wrong way or that communication has been lost...

I have been on automated trains late at night in big cities, it can get a bit scary even for a guy. On the street you can avoid situations that you can't when you are trapped in a little tube, which is not a natural situation for humans.

Luke_2 19th Jun 2019 14:06

In 15-20 years, probably. But for now, just look at the total fiasco with the 737 MAX, and the Airbus crashes during this decade.

Add to this the possible hacking of the plane. A good reason not to trust the computers too much (yet).

hexboy 19th Jun 2019 14:38

The reactions of the general public riding in an elevator (lift) which malfunctions and does its own thing when the buttons are pushed should give a clear indication of how accepting people are of machinery that malfunctions with no one controlling it.
It moves up and down on rails in a concrete shaft, so what could possibly go wrong and be scary about that?

SARF 19th Jun 2019 17:24

I’m sorry Dave. I’m afraid I can’t do that

Auxtank 19th Jun 2019 18:57


Originally Posted by SARF (Post 10497846)
I’m sorry Dave. I’m afraid I can’t do that

HAL, shut the f*ck up, I've kicked the tyres now you light the Godamn fires and let's get the hell out of here!


https://cimg6.ibsrv.net/gimg/pprune....d20f04c333.jpg

PropPiedmont 20th Jun 2019 01:03

How many airframes & lives have been saved by human pilots recognizing various types of runway incursions? Delaying a rotation or rotating early? Going around? How many pilots have saved engines and tires on aircraft while avoiding FOD while taxiing? How many ATC controllers have made errors that a human pilot trapped, preventing an accident? What about enroute weather considerations? There are so many different situations that can occur in aviation and it’s human pilots, making human decisions that keep it safe.

Technology and hacking are not what will prevent pilotless airliners; it’s the required human interface that will.

Bend alot 20th Jun 2019 01:57


Originally Posted by Luke_2 (Post 10497719)
In 15-20 years, probably. But for now, just look at the total fiasco with the 737 MAX, and the Airbus crashes during this decade.

Add to this the possible hacking of the plane. A good reason not to trust the computers too much (yet).

The 737 MAX fiasco happened because MCAS was needed because pilots are in the cockpit; in automated flight (autopilot) MCAS is not required.

Airbus crashes in the last decade? Nothing really stands out where pilots would have saved the day.

It is far easier to protect against hacking/hijacking in an aircraft without a cockpit. If a ground-based change to the flight plan were required, it could have a delay-and-report mode; the main flight data, with allowable deviations, could be held on a number of hardware drives placed externally on the aircraft, in several locations, by ground staff for the day's planned flights.

Water pilot 20th Jun 2019 03:43

The early successes of AI chess programs led to a false confidence that we would have machine intelligence shortly thereafter. It was decades after the first reasonable chess-playing program before we had any kind of reasonable natural language recognition, which is something three-year-olds master. Go is a much more complex game than chess, and it was quite an accomplishment to create programs that could win at it, but even so it is not a very good analogy for real life. Games are significantly easier to create learning networks for than real life, because in a game you have perfect knowledge of your current state (no failed AOA sensors) and a very deterministic outcome: you either win the game or you lose it.

There are many challenges to the problem. Pattern recognition is one: it is easy to recognize a pattern in a game, a little harder to recognize a pattern in a picture, and I have no idea how you recognize the pattern of what your jet feels like when it is hit by wake turbulence on takeoff -- but I am sure all of the real pilots here 'instinctively' recognize that pattern and can distinguish it from the feeling of taking off from a wet runway, or in a crosswind, or what it feels like if a tire blows out on takeoff (if that is something that you practice in the sim.)

Weighing the outcome is another major difference between game play and real life. The player with the most enclosed spaces wins the game of Go, so it is pretty simple to score: when the ending condition has been met, count up the enclosed squares, and the winner is the one with the most of them. In an excellent post earlier, a poster brought up the "Sully" question: with no engines, is it better to try to return to the airport with a non-zero probability of accidentally recreating 9/11, or is it better to try a water landing? How do you score the neural network's decision? If 3 times out of 10 the "return to base" scenario kills 1,000 people on the ground, is that a failure? What about 3 times out of 100? What are the weather conditions? What if you are an American plane in this situation over Moscow at a time when the US and Russia are on the brink of nuclear war? Does it make a difference if the plane is full of Mexicans, or if Mitch McConnell is onboard? The neural network will faithfully reproduce whatever value decisions you make (which is one of the real dangers of using AI for police and military work.)
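Water pilot's scoring question can be made concrete with a toy expected-loss calculation. The probabilities and casualty figures below are entirely invented; the only point is that the "best" choice flips with the weight a designer assigns to casualties on the ground:

```python
# Toy expected-loss comparison with invented numbers. It only shows that
# the "right" answer depends on the value weights a designer picks, which
# is exactly Water pilot's point.

def expected_deaths(option, w_ground=1.0):
    """Expected weighted deaths for an option described as a list of
    (probability, deaths_on_board, deaths_on_ground) outcomes."""
    return sum(p * (ob + w_ground * og) for p, ob, og in option)

# Invented outcome models for the two choices in the "Sully" question:
turn_back = [(0.9, 0, 0), (0.1, 150, 1000)]   # 10% chance of hitting the city
ditch     = [(0.8, 2, 0), (0.2, 150, 0)]      # ditching risks those on board

for w in (1.0, 0.1):   # how much do ground casualties count?
    tb, d = expected_deaths(turn_back, w), expected_deaths(ditch, w)
    choice = "turn back" if tb < d else "ditch"
    print(f"ground weight {w}: {choice}")
# prints: ground weight 1.0: ditch
#         ground weight 0.1: turn back
```

With ground casualties weighted equally, ditching wins; discount them to a tenth and the turn-back wins. Someone, somewhere, has to pick that weight, which is the value judgment being discussed.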

There is also the question of transparency. Do the people onboard have the right to know about these value judgments? I'd certainly like to know if the automatic plane is programmed to self destruct if a failure occurs over a populated area. With a human pilot, I can be fairly sure that the decisions that are made will closely match the decisions that I would make in the same situation, since the pilot shares my fate. A computer pilot doesn't care about survival at all, and a remote pilot knows he is going home at the end of the day no matter what the outcome for the passengers.

It is a tricky issue, and I think it will be a long time before the public cottons on to self-flying airplanes, and by that time we may not be able to fly anymore anyway. The problem with the AI approach is that it only works with perfect humans, of which there have been very few in history. The self-driving Uber car sounds great, but who wants to get into a car in the morning that got puked all over last night?

moosepileit 20th Jun 2019 04:05

How do you integrate manned and unmanned aircraft into and out of the same runways at current capacity flow?

The controlled chaos of max-rate landings and departures at KEWR and similar comes to mind.

Will ATC have to change? Who will fund that?

Is there enough bandwidth, at a high enough rate, globally? If not, who will pay for it? CPDLC is not fast enough for terminal operations, is it? The latest and greatest is probably not up to the task.

Pilotless planes today are not autonomous: someone does the takeoff and landing locally, and those that are more automated get a huge airspace restriction around their launch and recovery.

In an unmanned combat vehicle, the weight that was the crew and life-support systems becomes increased payload and firepower; do we see those first? The pilot, seat, etc. weigh the equivalent of a few more missiles, and without them there are fewer physical limitations in play.

Pilotless passenger and even cargo aircraft will be clean-sheet designs, yet will still need all the air conditioning and pressurization systems, so they will have to pay for themselves through labor costs alone, not weight saved and payload gained. Attractive in some respects, but someone new gets to work all the unglamorous bits of the duty.

Each paper gain comes with fresh vulnerabilities.

How will single pilot come into play? Will that also be a fresh generation of aircraft? How will that interact with ATC and bandwidth needs? Who funds that stepping stone? Customers? Governments?

bill fly 20th Jun 2019 07:36

...set aside for dinosaurs like me that actually enjoy a brisk drive through the countryside

Is that in an MG TD, Racer?

^_^

Nialler 20th Jun 2019 13:17


Originally Posted by moosepileit (Post 10498232)
How do you integrate manned and unmanned aircraft into and out of the same runways at current capacity flow?

...

I've worked for nearly forty years in IT, having taken a doctorate along the way.

Systems routinely encounter critical failures. That's the way they are. Not simply that, but they must be maintained. Situations occur which require a patch; the specific situation was not considered at the design stage.

I've worked exclusively on mainframes. With at least one vendor, the sequence numbers of PTFs have rolled past 99,999. A PTF is a Program Temporary Fix.

Now, you're an airline operator. A mail arrives. It links to a PTF. It is marked "HIPER", meaning high-impact and pervasive. The software support contract with your supplier specifies that it must be applied or your support lapses.

Do you apply it to your fleet immediately and let them fly? What if it introduces a different aberrant behaviour?

The problem is that it might not be just one of your fleet which fails. It could be many.

I must repeat: if you don't maintain software currency, your system *will* in time fail. Other users will encounter failures not yet suffered by you, and they may involve circumstances that you may meet in the future.

Hell, systems developed over decades to process something as simple as a bank balance fall over.

It's not simply that I wouldn't fly in an autonomous plane; I would refuse to have any part in any such development project.

Don't get me started on AI.

homonculus 20th Jun 2019 14:01

Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......

Nialler 20th Jun 2019 15:12


Originally Posted by homonculus (Post 10498643)
Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......

My experience of it is that there is a lot of probability involved. Bayesian stuff. I backed away from it the moment I first encountered it.

In the drive to autonomous operation, the issue with the MAX stands as a lesson. The key word in "AI" is "artificial".

OPENDOOR 20th Jun 2019 15:12


How will that interact with ATC and bandwidth needs?
The answer to that is probably here: https://en.wikipedia.org/wiki/Starli...constellation)

As far as pilotless passenger aircraft go I suspect that by the time we have an AI based computer that can handle any event as well as a human pilot it will have bought itself a big watch and done away with humanity.

Auxtank 20th Jun 2019 15:22


Originally Posted by homonculus (Post 10498643)
Would someone kindly define AI for me? The politicians bang on about how it will save the world, and every manufacturer claims his product is superior due to AI. But what is AI other than repetitive computing and machine learning? Doesn't sound either sexy or mind-blowing......

This is the best examination I've read so far - and I've done a lot of reading on it.


https://cimg3.ibsrv.net/gimg/pprune....b55195648.jpeg


Nialler 20th Jun 2019 15:35

There are ethical issues involved.

Someone mentioned the trolley problem earlier. It was apt. Your child is in the field beside the field with ten people. Who makes the decision to crash into the field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be made in committees? Will they have joint liability?

Auxtank 20th Jun 2019 15:42


Originally Posted by Nialler (Post 10498711)
Someone mentioned the trolley problem earlier. It was apt. Your child is in the field beside the field with ten people. Who makes the decision to crash into the field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be made in committees? Will they have joint liability?

That problem is examined and answered so well in that book above as he mulls over Asimov's Three Laws.

a_q 20th Jun 2019 15:56

I can see it now...

Currently -
Both pilots get food poisoning, are ill, hostess comes into cabin "Can anyone fly a plane???"

In 20 years time -
Computer gets Blue Screen of Death, hostess comes into cabin "Can anyone program a computer - to fly a plane???"

Auxtank 20th Jun 2019 16:59


Originally Posted by a_q (Post 10498728)
I can see it now...

Currently -
Both pilots get food poisoning, are ill, hostess comes into cabin "Can anyone fly a plane???"

In 20 years time -
Computer gets Blue Screen of Death, hostess comes into cabin "Can anyone program a computer - to fly a plane???"

Maybe not in twenty years' time, but eventually the answer will be:

Passenger comes forward with his own computer..."No, but this computer here can fly the plane."

DaveReidUK 20th Jun 2019 17:12


Originally Posted by homonculus (Post 10498643)
Would someone kindly define AI for me?

Computers making decisions that have real-life consequences.

A bit like autopilots have been doing for the last 60-odd years. :O


Auxtank 20th Jun 2019 17:20


Originally Posted by DaveReidUK (Post 10498793)
Computers making decisions that have real-life consequences.

A bit like autopilots have been doing for the last 60-odd years. :O

And there it is - the crux of the matter.

Computers making decisions? - or dumbly following programmed code.

There's an awfully big difference.

Your AP is an example of the latter btw.

Autopilots can't author their own algorithms; therefore they are dumb followers of programmed code, making them no more "intelligent" than your washing machine.
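Auxtank's distinction can be shown in miniature: a rule-following controller is just a fixed lookup from conditions to actions, written once by a human. Everything below (modes, thresholds, field names) is invented for illustration:

```python
# A "dumb follower of programmed code": a fixed rule table mapping
# conditions to actions. It can never produce a behaviour its author
# did not write down. All rules and numbers are invented.

RULES = [
    # (condition on the situation dict, action string) - first match wins
    (lambda s: s["ias_kts"] < s["vs_min_kts"], "PITCH_DOWN"),
    (lambda s: s["alt_ft"] < s["target_alt_ft"] - 100, "CLIMB"),
    (lambda s: s["alt_ft"] > s["target_alt_ft"] + 100, "DESCEND"),
]

def autopilot_action(situation: dict) -> str:
    for condition, action in RULES:
        if condition(situation):
            return action
    return "HOLD"

s = {"ias_kts": 250, "vs_min_kts": 180, "alt_ft": 35000, "target_alt_ft": 35000}
print(autopilot_action(s))   # prints: HOLD
```

Deciding when such a fixed table suffices, and when learned behaviour is unavoidable, is the cars-versus-aircraft argument earlier in the thread.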

Fusibleplugg 20th Jun 2019 17:55

What happens when the sensors send the autopilots a load of conflicting information? The Autopilots kick out and hand the a/c over to you.
What would a pilotless a/c do in that situation?

CargoOne 20th Jun 2019 18:33


Originally Posted by Nialler (Post 10498711)
Someone mentioned the trolley problem earlier. It was apt. Your child is in the field beside the field with ten people. Who makes the decision to crash into the field with one child? The software design team? The programmer? The manufacturer? The operator? Will these ethical decisions be made in committees? Will they have joint liability?

You have been reading too many newspaper articles titled "brave pilots struggled at the controls in a desperate attempt to avoid an orphanage". In real life no one is avoiding anything; at most it is a matter of choosing one place as more suitable than another, and even that is extremely rare in real life. Usually it is a boring CFIT.

DaveReidUK 20th Jun 2019 18:55


Originally Posted by Auxtank (Post 10498805)
Autopilots can't author their own algorithms.

Happily, that's true.

AI, on the other hand, does indeed "author its own algorithms".

In other words, when a flight under the control of AI meets a situation that hasn't specifically been foreseen by the programmers, nobody can predict WTF it's going to do. :O

Auxtank 20th Jun 2019 19:00


Originally Posted by DaveReidUK (Post 10498877)
Happily, that's true.

AI, on the other hand does indeed "author its own algorithms".

In other words, when a flight under the control of AI meets a situation that hasn't specifically been foreseen by the programmers, nobody can predict WTF it's going to do. :O


And, equally happily, that hasn't happened yet, because there are no truly AI autopilots operating - defence experiments aside - yet.
Even the Airbus at Le Bourget was confined by its programming. It had no free will whatsoever; it was analysing data input and responding with its pre-programmed instruction code.

Sorry, that's not AI. It's on a level with the toaster that browns the bread you spread your breakfast marmalade on.

ATC Watcher 20th Jun 2019 20:44


Originally Posted by OPENDOOR (Post 10498691)
How will that interact with ATC and bandwidth needs? The answer to that is probably here: https://en.wikipedia.org/wiki/Starli...constellation).

The problem with that is that we are slowly building a single point of failure: the communications with the satellites. Signals from satellites are very weak and can easily be jammed, spoofed or hijacked. Relying only on satellites for navigation/position, communications, separation (anti-collision) and now autonomous operations is not a very good option, especially in today's world, which is getting more dangerous by the day.
At Le Bourget this week, in various panel discussions, the emphasis seemed to be on terrestrial 5G or even 6G data transfer, using clouds based in strange places like the Arctic. We need a backup to satellites, and we need to think differently.





Copyright © 2024 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.