Air Traffic System Failure
Thread Starter
Join Date: Feb 2007 | Location: Australia | Posts: 10
Air Traffic System Failure
ATIS YBBN O 210409
RWY: 01
+ OPR INFO: NO DEPARTURES DUE SYSTEM FAILURE TILL TIME 0445.
START CLEARANCE REQUIRED
WND: 360/20
VIS: GT 10 KM
CLD: FEW030 FEW040
TMP: 29
QNH: 1014
Does anyone have some information on the closure of Brisbane Airport to all departures for the next 45 minutes?
Join Date: Oct 2007 | Location: Australia | Posts: 8
Hi there, first post.
Looks like it's all of YBBB having dramas.....
Mackay Tower explained that the flight data processor in Brissy failed in parts and the redundancy didn't come back up, so they shut it down. On restart the data appeared to be lost; it's not necessarily gone, though, it just hasn't been incorporated back into the system.
Bet they're having fun in the BNE ops room right now.....
Last edited by OmniRadial; 21st Jan 2009 at 04:26.
Flight Data Processor Failure
Not only do controllers need to enter flight plan details manually for each sector a flight will transit, the co-ordination requirements between controllers changes. Each console becomes in effect a mini TAAATS system, that does not talk to any other TAAATS system, so controllers need to give full coordination for each flight, and full radar handovers as the normal system tools do not work.
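A crude way to see why that hurts, as a toy Python sketch (the numbers and function names are made up for illustration, not anything from TAAATS): with the shared system, one data entry per flight reaches every console; without it, every sector a flight transits needs its own manual entry, on top of the voice coordination.

```python
# Toy workload comparison when consoles stop sharing flight data.
# Purely illustrative -- numbers and names are invented.

def shared_entries(flights):
    # With a working FDP, one entry per flight reaches every console.
    return flights

def manual_entries(flights, sectors_per_flight):
    # Each sector a flight transits needs its own manual flight-plan
    # entry, plus full verbal coordination and radar handovers.
    return flights * sectors_per_flight

print(shared_entries(100))       # 100 entries system-wide
print(manual_entries(100, 5))    # 500 entries, before any voice coordination
```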
Bear in mind also that there has been no simulator refresher training program for over 2 years!
Thread Starter
Join Date: Feb 2007 | Location: Australia | Posts: 10
Bear in mind also that there has been no simulator refresher training program for over 2 years!
Join Date: Feb 2004 | Location: brisvegas | Posts: 64
It's my fault! I turned off the machine that goes 'ping' as I walked out the door today. I thought we had a plan if something fell over. Oh, now I remember. Everybody STOP.......
Crikey, I hope the airconditioning is still working 'cos there will be a few sweaty palms atm.
So how could you ATC's possibly remember how to safely operate this computer system when you don't receive refresher training?
"Show me one incident in the room that has been caused by not having simulator refresher training"
But seriously, this type of failure is similar to what occurs in system data upgrades. The data upgrades are planned shutdowns that are coordinated and happen at times of low/no traffic (night).
So, if your ATC has been rostered for a doggo on which there was an upgrade recently...
Join Date: Sep 2003 | Location: Australia | Posts: 18
Computer glitch delays flights
Article from: The Courier-Mail
FLIGHTS out of Brisbane Airport are running up to an hour and a half behind schedule after an air traffic control computer glitch grounded aircraft.
Flights finally began to be cleared for takeoff about 3.50pm (AEST) after the delay.
A spokeswoman from Qantas said there were "some delays" of up to 90 minutes as a result of air traffic control technical issues in Brisbane.
"We have experienced some delays of 60 to 90 minutes," the spokeswoman said.
"We don't have a definitive number as to how many were delayed, we're still in the process as it's been back up and running as of 3.50pm."
Computer glitch delays flights | The Australian
I am sure the tossers will find some way of blaming the controllers. If no one smashes into someone else during this, that fu ck wit Greg should at long last realise his workforce is extremely professional, and should be ashamed of how he has treated them and described their motives.
stupid
TID EDIT. Offensive, whichever way you look at it.
Could an ATCO give us a brief rundown on how the backup systems work? I was under the impression that the two centres' computers "ghost" each other, so that if one centre has a major failure (eg burnt down), the other centre's computer would have all the flight data and would be able to take over controlling the other FIR - assuming that there are sufficient ATCOs.
No, the centres are entirely stand-alone. The flight data processor is the brain of the whole thing. Back in the day we had paper strips with the tracking and level information written on them, and when it changed, we wrote the new details on them. Every controller had a strip (or multiple) that showed where the aircraft were going. What were called ADSOs (airways data systems officers), and before that Flight Datas, used to prepare the strips from hard copy flight plans that the pilots would submit.
Today, the flight data processor gets all that info, sends it to every console and updates the info as it changes. The FDP links all the consoles together, so much of the info is not required to be physically coordinated. This made it extremely efficient in areas where the separation is based solely on the numbers. Where I work, we used to be 4 Flight Service consoles and 2 ATC consoles. Now we do it ALL with 1 person. When the FDP falls over, those efficiencies are lost.
If it was a full FDP fail, it is a legitimately big deal.
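A rough way to picture the FDP's role, as a toy Python sketch (class and sector names are my own invention, nothing to do with TAAATS internals): one central processor fans every flight-plan update out to all the consoles, so nobody has to phone anybody.

```python
# Toy sketch of a central flight data processor fanning updates out
# to consoles. Purely illustrative -- not TAAATS internals.

class Console:
    def __init__(self, name):
        self.name = name
        self.flights = {}          # callsign -> latest known details

    def receive(self, callsign, details):
        self.flights[callsign] = details

class FlightDataProcessor:
    def __init__(self):
        self.consoles = []

    def attach(self, console):
        self.consoles.append(console)

    def update(self, callsign, details):
        # One entry here reaches every attached console automatically.
        for c in self.consoles:
            c.receive(callsign, details)

fdp = FlightDataProcessor()
sectors = [Console(n) for n in ("BN_APP", "BN_ENR", "MKY_TWR")]
for c in sectors:
    fdp.attach(c)

fdp.update("QFA123", {"level": "FL350", "route": "YBBN-YSSY"})
# When the FDP fails, each console keeps only what it enters itself,
# and every change has to be coordinated verbally, console by console.
```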
Join Date: Dec 2003 | Location: Black stump | Posts: 62
In disaster recovery, it is possible to re-configure a centre (ML or BN) to operate the other centre's airspace ... except that everyone would be in a "degraded" mode due reduced number of workstations to manage the airspace.
But this is disaster recovery ... not degraded modes ...
If a major processor (eg flight data processor) fails, then its secondary unit is supposed to take over.
If both processors fail, then the centre is in degraded mode, and needs to reduce traffic to low levels so that the controllers can manage the significant loss of functionality.
Degraded modes ops is not the same as disaster recovery (which may take a couple days to implement)
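The primary/secondary arrangement described above can be sketched as a simple failover chain (again a toy model with invented names, not the actual redundancy logic, which is far more involved):

```python
# Toy failover sketch: primary FDP, hot standby, then degraded mode.
# Illustrative only.

def select_mode(primary_ok, secondary_ok):
    if primary_ok:
        return "normal (primary FDP)"
    if secondary_ok:
        # The secondary unit is supposed to take over transparently.
        return "normal (secondary FDP)"
    # Both processors down: reduce traffic, coordinate manually.
    return "degraded (manual coordination, reduced traffic)"

print(select_mode(True, True))     # normal (primary FDP)
print(select_mode(False, True))    # normal (secondary FDP)
print(select_mode(False, False))   # degraded (manual coordination, reduced traffic)
```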
Pleaseeeeeee
Chapi,
I'm shocked...."Disaster Recovery".
Dear me no no no....
AsA spin says thou shalt not use the word "disaster".
Business Continuity please!
Join Date: Jan 2008 | Location: Melbourne, Vic | Posts: 46
Chapi et al,
Disaster recovery is one of those things that sounded great when Thomson/Thales were selling us TAAATS.
It is, in effect, a great big fairy tale.
It has never been realistically tested (apart from showing that we can, indeed, reconfigure the sim to look something like Brisbane or Melbourne).
The logistics involved in firing up either centre are huge (how do you get the required controllers from one centre to the other?)
One of the biggest stumbling blocks is comms - I recently saw the TAS disaster recovery plan for VHF comms - it is so out of date it is not funny - I doubt anyone would get anywhere near their correct frequencies.
All the people involved in putting together the business continuity plans (disaster recovery) have long been hounded out of the organisation so none of the planning is up to date with recent (and not so recent) changes.
Quite frankly, the safest way is for the affected centre to sit tight and wait for the cavalry.
Join Date: Dec 2003 | Location: Black stump | Posts: 62
Ooopps,
ML & BN are two independent systems;
When the FDP fails it's a real problem for the centre;
Degraded modes ops means slooowwing everything down - 'business continuity'
In a catastrophic event (eg BN centre burns down), we are supposed to be able to recover by moving to the other centre. I think the "correct" term is 'business resumption'
It's a plan - hope I am not on to try for real.
Join Date: Sep 2006 | Location: melbourne | Posts: 90
GB - as mentioned above, the 'one centre taking over from the other' is only theoretical; never been tried or tested; and even according to the script, the first response would be about 3 days later!
'Max' capacity, which would still be way short of normal, would take about 1 - 2 weeks to achieve.
And that's after all the spare controllers get moved to the recovery centre. Oh, hang on - we don't have any spare controllers.....