PPRuNe Forums - View Single Post - NATS interview process
Old 22nd Aug 2008, 11:10
Sandgrounder
 
Join Date: Jan 2005
Location: London
Posts: 11
Zooker = joker

From an occupational psych point of view, it probably goes a little something like this:

Recruitment is expensive, which creates pressure to limit the process, particularly the face-to-face meetings in hotels etc. However, training someone who then fails (a "false positive") is even more expensive, which creates pressure for the process to be robust and to end up selecting people likely to succeed - so cutting the process down too much would be counter-productive.

So, how do you work out which tests predict successful controllers? Probably by trying out some cost-efficient tests, thought to reflect skills important in ATC, on current controllers and, if possible, comparing their scores with some objective measure of their actual performance as an ATCO. The tests whose scores distinguish the better controllers are the ones more likely to be useful.
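(A toy sketch in Python of the sort of check I mean - every name and number below is invented, just to show the shape of it: correlate serving controllers' scores on a trial test with some performance rating, and keep the tests where the correlation is meaningfully positive.)

```python
# Toy sketch (all numbers invented): does a trial test discriminate?
# Correlate serving controllers' scores on the test with an objective
# performance rating; tests with a clearly positive correlation are
# the ones worth keeping in the selection battery.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: cube-test scores for ten serving controllers and
# an invented 1-10 performance rating from their unit.
cube_scores = [88, 92, 75, 60, 95, 70, 82, 55, 90, 65]
performance = [8, 9, 6, 5, 9, 6, 7, 4, 8, 5]

print(f"cube score vs rating: r = {pearson(cube_scores, performance):.2f}")
```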

To monitor how the chosen tests then perform in practice, it is likely that test scores are assessed against data on whether people pass or fail college, their scores on the different tests at college, and whether they validate – and at how "difficult" a unit.

With enough data, statistical models can be built that predict how likely someone is to, e.g., validate anywhere, given their test scores. So, for example, perhaps those who scored over 90% on the cubes are on average 10% more likely to validate than those who scored less than 70% (but still got in). There will be similar predictive coefficients attached to performance on the other tests.
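(Again purely illustrative - a minimal sketch of what such a model might look like: a logistic regression turning test scores into a probability of validating. The test names and coefficients are invented; real ones would be fitted to historical candidate outcome data.)

```python
import math

# Minimal sketch of the kind of model described above: a logistic
# regression turning selection-test scores into a probability of
# validating. Test names and coefficients are invented for
# illustration, not NATS's actual model.

COEFFS = {"cubes": 0.04, "checking": 0.03, "maths": 0.02}
INTERCEPT = -6.0

def p_validate(scores):
    """Predicted probability of validating, given 0-100 test scores."""
    z = INTERCEPT + sum(COEFFS[test] * s for test, s in scores.items())
    return 1 / (1 + math.exp(-z))

# Two hypothetical applicants, identical except on the cubes.
strong_cubes = {"cubes": 92, "checking": 80, "maths": 75}
weak_cubes   = {"cubes": 68, "checking": 80, "maths": 75}

print(f"strong on cubes: {p_validate(strong_cubes):.0%}")  # ~83%
print(f"weak on cubes:   {p_validate(weak_cubes):.0%}")    # ~65%
```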

I have faith that something like the above has been done, so there is probably data suggesting that higher performance on the tests still in use, including the cubes, predicts higher performance in the job / a higher likelihood of validating. Otherwise there'd be no point using them, and administering them would be an unnecessary cost.

Pinning down what "concept" a test actually measures, and what to call that "concept", is always a matter of debate in psychology. But the fact that you have to rotate the cubes in your head, and also visualise in 3D when doing radar, gives the cube test "face validity"; and the fact that, presumably, the best radar controllers tend to be good at the cube test suggests "construct validity". In the end, though, who cares what we call the concept - it's all about definitions and operationalisation anyway. If you define "spatial awareness" as the ability to visualise and rotate objects in 3D before you start, then job done.

Of course, in an ideal world you'd put all prospective applicants in the college sims / live training for a week, as those are more closely related to doing the job, but the reduction in "false positives" wouldn't be enough to offset the massive cost of doing this. There may be some "false negatives" as a result (people who could do the job but somehow fail on the tests that are used), and it's these people the selection process is harshest on – obviously difficult to get data on, or to estimate how many of them there might be. From a recruiter's point of view, false negatives don't cost them anything as long as they're still meeting their head count and the tests aren't too stringent.
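(Some back-of-envelope sums - every figure invented - of why the sim-week idea is unlikely to pay for itself:)

```python
# Back-of-envelope sums, every figure invented, for why a sim week
# for all applicants doesn't pay: the extra screening cost outweighs
# the training money saved on the false positives it would catch.

applicants        = 1000
selected          = 200
sim_week_cost     = 3_000     # assumed cost per applicant of a sim week
training_cost     = 100_000   # assumed cost sunk into one training failure
fp_rate_now       = 0.05      # assumed failure rate among current selections
fp_rate_with_sims = 0.02      # assumed improved rate with sim screening

extra_cost = applicants * sim_week_cost
saved      = selected * (fp_rate_now - fp_rate_with_sims) * training_cost

print(f"extra screening cost: {extra_cost:>10,}")  #  3,000,000
print(f"training cost saved:  {saved:>10,.0f}")    #    600,000
```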

Anyway, I think it always helps, as an applicant, to know where the recruiter is coming from / why they're doing what they're doing to you!