> "If the captain could figure it out, so could the computer."
The autopilot had disengaged, most likely because the pitot tubes had iced over.
The flight control system entered ALT2 (alternate) law, in which bank-angle protection is lost; angle-of-attack protection is also lost when two or more input references are lost.
You might describe these circumstances as the computer saying "I don't know what the heck's going on, you humans figure it out please".
As a former engineer who worked on the 757 flight control system, I am not terribly impressed with that design.
Having three pitot tubes iced over means they read zero velocity. It is reasonable for the computer to be designed to recognize that if all three pitot tubes read zero, then the pitot tubes are the problem. With the altimeter unwinding, it should be able to recognize a stall; with the turn-and-bank indicator and the AOA indicator, it should be able to return to straight and level.
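To sketch the kind of logic I mean (a toy example with invented thresholds, in no way the actual 757 design):

    # Toy sketch of the cross-check I mean; the thresholds are
    # invented for illustration, not taken from any real flight
    # control law.
    def cross_check(ias_kts, alt_rate_fpm, pitch_deg):
        # A jet in cruise cannot actually be doing ~0 kt, so if
        # every airspeed source reads near zero, blame the pitots.
        if all(v < 30 for v in ias_kts):
            return "pitot failure: disregard IAS, hold pitch and thrust"
        # Nose up while the altimeter unwinds rapidly is the
        # signature of a stall, whatever the airspeed sensors say.
        if pitch_deg > 5 and alt_rate_fpm < -3000:
            return "stall: lower the nose, roll wings level"
        return "no fault inferred"

    print(cross_check([0.0, 0.0, 0.0], -200.0, 2.0))           # pitot failure
    print(cross_check([275.0, 270.0, 272.0], -11000.0, 15.0))  # stall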
Recall that the captain figured it out at a glance and knew exactly what to do.
The BEA accident report[1] (hosted by the FAA) gives a more comprehensive description of events.
The pitot tubes had differential icing and didn't all read 0 kt – each tube reported a different velocity, such as 40 kt or 60 kt (against an expected baseline of ~275 kt). The computer correctly recognised the data as invalid and rejected it.
It's a common narrative that the captain immediately figured out the issue. The report and the transcript of the cockpit voice recording[2] note that the captain's interventions showed he had not identified the stall; nor had the copilots.
~ cockpit recording ~
0:00 autopilot disconnects
0:01 [copilot right] "I have the controls"
0:11 [copilot right] "We haven't got a good display of speeds"
1:26 captain enters cockpit
1:30 [copilot right] "I don’t have control of the airplane at all"
1:38 [captain] "Er what are you doing?"
3:37 [captain] "No no don't climb"
4:00 [captain] "Watch out you’re pitching up there"
4:02 [copilot right] "Well we need to we are at four thousand feet"
4:23 ~ recording stops ~
[1] https://www.faa.gov/sites/faa.gov/files/AirFrance447_BEA.pdf
[2] https://bea.aero/uploads/tx_elyextendttnews/annexe.01.en.pdf
Is it possible that the 40/60 kt readings themselves indicate a stall? Regardless, the drop in altitude while the nose was up should also have indicated a stall.
I know that designing avionics and accounting for all possible scenarios is a difficult job, and we learn from the failures. But I don't buy that it was impossible/impractical for the avionics to figure out what was going on based on what the other instruments were saying.
I agree that comparing the various sensor data points could allow a reasonable conclusion – e.g. IAS varies across sensors, therefore IAS is unreliable. So what additional information could allow a reasonable diagnosis?
The flight system could identify a stall and prominently alert the pilots. That's one of the recommendations from the report: to implement a dedicated stall warning. The stall warning was actually active, but disregarded/unrecognised by the pilots because of the number of other simultaneous alarms and extraneous information, including an intermittent recommendation from the Flight Director system to pitch up at 12°.
In general, Airbus aircraft don't have a dedicated AOA indicator visible to the pilots; instead, AOA is conveyed to the pilots by proxy via the airspeed indicator.
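To make that concrete, here's a toy sketch of the vote-then-alert idea (invented thresholds and priorities – nothing like certified avionics logic):

    # Illustrative only: vote IAS out when the sources disagree,
    # then fall back to attitude/vertical-speed cues, and order
    # the alerts rather than firing a cascade of alarms at once.
    def diagnose(ias_kts, pitch_deg, alt_rate_fpm, max_spread_kts=20.0):
        alerts = []
        if max(ias_kts) - min(ias_kts) > max_spread_kts:
            alerts.append("IAS UNRELIABLE")  # e.g. 40/60/275 kt
        if pitch_deg > 5 and alt_rate_fpm < -3000:
            alerts.append("STALL")
        # Order alerts so the highest-priority one is surfaced first.
        priority = {"STALL": 0, "IAS UNRELIABLE": 1}
        return sorted(alerts, key=priority.get) or ["NORMAL"]

    print(diagnose([40.0, 60.0, 275.0], 15.0, -11000.0))
    # -> ['STALL', 'IAS UNRELIABLE'], with STALL first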
For AF447 the flight avionics probably had enough information to bring the aircraft back to straight and level flight without pilot input.
On the other hand the 737 Max crashes were attributed to MCAS overriding the pilot input and lowering the nose, in response to incorrect/faulty AOA sensor data.
Both were extreme examples, and the recommendations probably coalesce somewhere in the middle: better information (and alert prioritisation) for pilots and redundancy in sensors and logic.
Air Astana Flight 1388 also comes to mind. I'm not sure how a flight control system would detect cross-connected aileron controls and adapt accordingly (without introducing other risks or failure modes). Given the glacial pace of change and approval in aviation, we're probably 20–50 years away from that level of autonomy.
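The naive approach would be a sign-consistency check between commanded and measured roll. A toy sketch (invented window and thresholds – and the hard part would be proving it never misfires on turbulence or other failures):

    # Naive sketch: flag a reversed roll response when the measured
    # roll rate persistently opposes the commanded direction.
    # Sample window and thresholds are invented for illustration.
    def controls_reversed(cmds, roll_rates, min_samples=10):
        pairs = [(c, r) for c, r in zip(cmds, roll_rates) if abs(c) > 0.2]
        if len(pairs) < min_samples:
            return False  # not enough sustained input to judge
        opposing = sum(1 for c, r in pairs if c * r < 0)
        return opposing / len(pairs) > 0.8

    # e.g. sustained full-left stick producing steady right roll:
    print(controls_reversed([-1.0] * 12, [4.0] * 12))  # True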
I personally find his work and his posts interesting, and enjoy seeing them pop up on HN.
If you prefer not to see his posts on the HN list pages, a practical solution is to use a browser extension (such as Stylus) to customise the HN styling to hide the posts.
Here is a specific CSS style which will hide submissions from Jeff's website:
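Something like the following (a sketch assuming HN's current markup – tr.athing rows with .titleline links – and assuming jeffgeerling.com is the origin in question; substitute whichever domain applies):

    /* Dim HN submissions linking to a given origin. Assumes HN's
       current markup (tr.athing rows, .titleline links) and uses
       jeffgeerling.com as a stand-in for the domain to hide.
       Requires a browser with :has() support. */
    tr.athing:has(.titleline a[href*="jeffgeerling.com"]),
    tr.athing:has(.titleline a[href*="jeffgeerling.com"]) + tr {
      opacity: 0.05; /* nearly invisible, but still occupies space */
    }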
In this example, I've made the posts almost invisible while they still take up space on the screen (to avoid confusion about the post numbers jumping from N to N+2). You could use { display: none } to hide the relevant posts completely.
The approach can be adapted to hide any origin you'd prefer not to come across. The limitation is that the style may need updating if HN changes its markup structure.
I stopped following this guy back in 2015, when he forked all of my Ansible roles and published everything to Ansible Galaxy before mine were even complete, tested, and ready to be published. The same day they were forked, a new GitHub organization with the name of the org I had used in my roles was registered and squatted. It completely turned me off to his methods.
There are many common pesticides with extreme toxicity to humans, including HCN (hydrogen cyanide), (ab)used under the brand name Zyklon B in WW2 and still sold today as a controlled-use pesticide under generic brand names.
It's a chasm-leap to say that pesticides are generally safe for humans.
For additional context, the "Nuclear Device" in question is a SNAP-19C, an RTG (Radioisotope Thermoelectric Generator), which generates electricity from radioactive decay (approximately 30 W for this model).
The SpotCast capability does accept that information (which is how it's used in the app). I need to expose it in the docs, thanks for pointing it out. Will update shortly.
Thanks for updating the docs[1] with the vessel parameter, I may take this for a spin.
I'd also find it valuable if the AI could accommodate context for the trip – some examples: "scuba diving", "fishing", "match racing", "touring". Do you think that's feasible with the models you have available?
Regarding your other question about routes: in the app, that's what's called a TripCast, where you enter the departure and destination locations and it determines the forecast for that trip. TripCasts are not currently exposed in the API – just the SpotCast capability, which we felt would have broader appeal. Exposing TripCasts in the API is a planned capability, and if there is demand for it we can expose it sooner rather than later.
We've known that humans have been harnessing natural fire (e.g. sticks/vegetation set alight by lightning) for over a million years.
However, until last week, we thought that the earliest point of humans _deliberately creating_ fire – e.g. through flint and tinder – was 50,000 years ago.
A new find has dated the earliest known instance of deliberately created fire to 400,000 years ago (probably by early Neanderthals).
So I agree - the archaeological evidence and our interpretation of history is spotty at best.