Analyzing the Mesos agent recover process with source code and logs

During agent recovery, the agent normally tries to reconnect to its executors. If the reconnect succeeds, it re-registers the executor ("Re-registering executor"); if that re-registration also succeeds, the task stays in the normal RUNNING state. If it fails, the slave actively cleans up the un-reregistered executors ("Cleaning up un-reregistered executors"), which boils down to the following three steps.

the executor exits from the container -> the container is destroyed -> `docker stop` is run
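
The decision flow can be summarized in a small sketch. This is only an illustration of the behavior described above, not the actual slave.cpp / docker.cpp code; the types and helper names are mine.

```cpp
// Minimal sketch of the recovery decision flow described above.
// NOT real Mesos source: Executor, waitForExecutorExit, destroyContainer
// and runDockerStop are illustrative names only.
#include <iostream>
#include <string>
#include <vector>

struct Executor {
  std::string id;
  bool reregistered;  // did the executor answer the reconnect request in time?
};

// Step 1 of the cleanup path: the executor process is gone (or exits).
void waitForExecutorExit(const Executor& e) {
  std::cout << "Executor for container of '" << e.id << "' has exited\n";
}

// Step 2: destroy the agent-side container bookkeeping.
void destroyContainer(const Executor& e) {
  std::cout << "Destroying container of '" << e.id << "'\n";
}

// Step 3: stop the underlying Docker container.
void runDockerStop(const Executor& e) {
  std::cout << "Running docker stop on container of '" << e.id << "'\n";
}

int main() {
  // Two recovered executors: one answers the reconnect request, one does not.
  std::vector<Executor> recovered = {
      {"test-all.aec2a60c", /*reregistered=*/true},
      {"yebo.audit-jy.bce40fe6", /*reregistered=*/false},
  };

  for (const Executor& e : recovered) {
    std::cout << "Sending reconnect request to executor '" << e.id << "'\n";

    if (e.reregistered) {
      // Normal path: the executor re-registers and the task stays RUNNING.
      std::cout << "Re-registering executor '" << e.id << "' -> RUNNING\n";
    } else {
      // Abnormal path: cleanup of un-reregistered executors,
      // which ends with the task being reported as TASK_FAILED.
      std::cout << "Cleaning up un-reregistered executor '" << e.id << "'\n";
      waitForExecutorExit(e);
      destroyContainer(e);
      runDockerStop(e);
      std::cout << "Status update TASK_FAILED for '" << e.id << "'\n";
    }
  }
  return 0;
}
```
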

To see this process clearly, I have attached the log output for both cases; compare them side by side:

The following shows a normal recover process:

I0612 18:54:51.033638 31021 status_update_manager.cpp:208] Recovering executor 'test-all.aec2a60c-2d59-11e6-ae07-56847afe9799' of framework 56f543ec-80eb-4a4c-a411-aef33db3b71d-0000

I0612 18:54:51.035840 31023 docker.cpp:801] Recovering Docker containers
I0612 18:54:51.036258 31021 containerizer.cpp:407] Recovering containerizer
I0612 18:54:51.036924 31021 containerizer.cpp:456] Skipping recovery of executor 'test-all.aec2a60c-2d59-11e6-ae07-56847afe9799' of framework 56f543ec-80eb-4a4c-a411-aef33db3b71d-0000 because it was not launched from mesos containerizer

I0612 18:54:51.069540 31018 provisioner.cpp:245] Provisioner recovery complete
I0612 18:54:51.192268 31020 docker.cpp:905] Recovering container '675c8baf-906b-46df-a3ed-44632e899f27' for executor 'test-all.aec2a60c-2d59-11e6-ae07-56847afe9799' of framework '56f543ec-80eb-4a4c-a411-aef33db3b71d-0000'

I0612 18:54:51.193879 31023 slave.cpp:4511] Sending reconnect request to executor 'test-all.aec2a60c-2d59-11e6-ae07-56847afe9799' of framework 56f543ec-80eb-4a4c-a411-aef33db3b71d-0000 at executor(1)@10.23.85.182:27242
I0612 18:54:51.194742 31023 slave.cpp:2820] Re-registering executor 'test-all.aec2a60c-2d59-11e6-ae07-56847afe9799' of framework 56f543ec-80eb-4a4c-a411-aef33db3b71d-0000

I0612 18:54:53.194758 31019 slave.cpp:4571] Finished recovery
I0612 18:54:53.195210 31019 slave.cpp:797] New master detected at master@10.23.85.127:5050
I0612 18:54:53.195333 31019 slave.cpp:822] No credentials provided. Attempting to register without authentication
I0612 18:54:53.195354 31019 slave.cpp:833] Detecting new master
I0612 18:54:53.195386 31019 status_update_manager.cpp:174] Pausing sending status updates
I0612 18:54:53.342232 31019 slave.cpp:1072] Re-registered with master master@10.23.85.127:5050
I0612 18:54:53.342296 31019 slave.cpp:1108] Forwarding total oversubscribed resources
I0612 18:54:53.342358 31019 status_update_manager.cpp:181] Resuming sending status updates

I0612 18:54:53.342737 31018 slave.cpp:2255] Updating framework 56f543ec-80eb-4a4c-a411-aef33db3b71d-0000 pid to scheduler-4a4080c2-3fc5-4bc5-b557-2a33f49de8cb@10.23.85.126:14617
I0612 18:54:53.342737 31019 status_update_manager.cpp:181] Resuming sending status updates

The following shows an abnormal recover process:

I0624 14:45:58.824522 10247 slave.cpp:5482] Recovering executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000
I0624 14:45:58.826632 10241 status_update_manager.cpp:200] Recovering status update manager
I0624 14:45:58.826674 10241 status_update_manager.cpp:208] Recovering executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000

I0624 14:45:58.828812 10250 docker.cpp:801] Recovering Docker containers
I0624 14:45:58.829248 10251 containerizer.cpp:407] Recovering containerizer
I0624 14:45:58.830082 10251 containerizer.cpp:456] Skipping recovery of executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000 because it was not launched from mesos containerizer
I0624 14:45:59.071164 10242 docker.cpp:905] Recovering container '9ec80365-2d18-47db-a4ed-4a425e52b54c' for executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework '20141201-145651-1900714250-5050-3484-0000'
I0624 14:45:59.074317 10242 docker.cpp:1960] Executor for container '9ec80365-2d18-47db-a4ed-4a425e52b54c' has exited
I0624 14:45:59.074390 10242 docker.cpp:1724] Destroying container '9ec80365-2d18-47db-a4ed-4a425e52b54c'

I0624 14:45:59.074527 10242 docker.cpp:1852] Running docker stop on container '9ec80365-2d18-47db-a4ed-4a425e52b54c'

I0624 14:46:02.154532 10248 group.cpp:427] Trying to create path '/mesos-jylt-online02' in ZooKeeper
I0624 14:46:02.156792 10251 detector.cpp:152] Detected a new leader: (id='146')
I0624 14:46:02.156997 10244 group.cpp:700] Trying to get '/mesos-jylt-online02/json.info_0000000146' in ZooKeeper
I0624 14:46:02.158434 10246 detector.cpp:479] A new leading master (UPID=master@10.153.125.1:5050) is detected

I0624 14:46:04.497295 10251 slave.cpp:4511] Sending reconnect request to executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000 at executor(1)@10.153.96.23:63146

E0624 14:46:04.498430 10251 slave.cpp:3876] Termination of executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000 failed: Container '9ec80365-2d18-47db-a4ed-4a425e52b54c' not found
I0624 14:46:04.498478 10251 slave.cpp:3033] Handling status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000 from @0.0.0.0:0
E0624 14:46:04.498873 10241 slave.cpp:3283] Failed to update resources for container 9ec80365-2d18-47db-a4ed-4a425e52b54c of executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' running task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 on status update for terminal task, destroying container: Container '9ec80365-2d18-47db-a4ed-4a425e52b54c' not found
I0624 14:46:04.500653 10249 status_update_manager.cpp:320] Received status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000

I0624 14:46:04.500700 10249 status_update_manager.cpp:824] Checkpointing UPDATE for status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000

W0624 14:46:04.502094 10249 slave.cpp:3377] Dropping status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000 sent by status update manager because the slave is in RECOVERING state
I0624 14:46:06.498636 10241 slave.cpp:2973] Cleaning up un-reregistered executors
I0624 14:46:06.498718 10241 slave.cpp:4571] Finished recovery

I0624 14:46:06.499346 10245 status_update_manager.cpp:174] Pausing sending status updates
I0624 14:46:06.499390 10248 slave.cpp:797] New master detected at master@10.153.125.1:5050
I0624 14:46:06.499508 10248 slave.cpp:822] No credentials provided. Attempting to register without authentication
I0624 14:46:06.499549 10248 slave.cpp:833] Detecting new master
I0624 14:46:06.741399 10247 slave.cpp:1072] Re-registered with master master@10.153.125.1:5050
I0624 14:46:06.741472 10247 slave.cpp:1108] Forwarding total oversubscribed resources
I0624 14:46:06.741567 10252 status_update_manager.cpp:181] Resuming sending status updates

W0624 14:46:06.741739 10252 status_update_manager.cpp:188] Resending status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000

W0624 14:46:06.742099 10241 status_update_manager.cpp:188] Resending status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000

I0624 14:46:06.742238 10247 slave.cpp:3431] Forwarding the update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000 to master@10.153.125.1:5050

I0624 14:46:06.755786 10241 status_update_manager.cpp:824] Checkpointing ACK for status update TASK_FAILED (UUID: d833db9a-c43a-4458-831b-31ef2abf465b) for task yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799 of framework 20141201-145651-1900714250-5050-3484-0000
I0624 14:46:06.757225 10241 slave.cpp:3996] Cleaning up executor 'yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' of framework 20141201-145651-1900714250-5050-3484-0000 at executor(1)@10.153.96.23:63146
I0624 14:46:06.757438 10244 gc.cpp:55] Scheduling '/data/mesos/slaves/75633c0c-a508-46d1-b2b0-0db58ea6b878-S12/frameworks/20141201-145651-1900714250-5050-3484-0000/executors/yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799/runs/9ec80365-2d18-47db-a4ed-4a425e52b54c' for gc 6.99999123395852days in the future
I0624 14:46:06.757504 10244 gc.cpp:55] Scheduling '/data/mesos/slaves/75633c0c-a508-46d1-b2b0-0db58ea6b878-S12/frameworks/20141201-145651-1900714250-5050-3484-0000/executors/yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' for gc 6.99999123346074days in the future
I0624 14:46:06.757505 10241 slave.cpp:4084] Cleaning up framework 20141201-145651-1900714250-5050-3484-0000
I0624 14:46:06.757562 10244 gc.cpp:55] Scheduling '/data/mesos/meta/slaves/75633c0c-a508-46d1-b2b0-0db58ea6b878-S12/frameworks/20141201-145651-1900714250-5050-3484-0000/executors/yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799/runs/9ec80365-2d18-47db-a4ed-4a425e52b54c' for gc 6.99999123306667days in the future
I0624 14:46:06.757603 10244 gc.cpp:55] Scheduling '/data/mesos/meta/slaves/75633c0c-a508-46d1-b2b0-0db58ea6b878-S12/frameworks/20141201-145651-1900714250-5050-3484-0000/executors/yebo.audit-jy.audit-jy.bce40fe6-3918-11e6-93b3-56847afe9799' for gc 6.9999912327763days in the future

Therefore, during an upgrade or when the mesos agent reconnects, the executors of most tasks can be recovered and re-registered, so their tasks keep running normally. A small number fall into the second case above, where the executor never re-registers and the task ends up as TASK_FAILED.
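Both logs also hint at how long the agent waits before giving up: in each case roughly two seconds pass between "Sending reconnect request" and either "Finished recovery" or "Cleaning up un-reregistered executors". The tiny program below just recomputes that gap from the timestamps in the abnormal log; interpreting it as the agent's executor re-registration window (configurable in later Mesos versions via the agent's --executor_reregistration_timeout flag) is my assumption, not something the log prints.

```cpp
// Recompute the gap between "Sending reconnect request" (slave.cpp:4511)
// and "Cleaning up un-reregistered executors" (slave.cpp:2973) using the
// timestamps copied from the abnormal log above.
#include <cstdio>

// Convert "HH:MM:SS.ffffff" into seconds since midnight.
static double toSeconds(int h, int m, double s) {
  return h * 3600.0 + m * 60.0 + s;
}

int main() {
  double reconnectSent = toSeconds(14, 46, 4.497295);
  double cleanupStart  = toSeconds(14, 46, 6.498636);

  // Prints ~2.001 s: the executor had about two seconds to re-register.
  std::printf("executor had %.3f s to re-register\n",
              cleanupStart - reconnectSent);
  return 0;
}
```
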

Original article: https://www.cnblogs.com/qianggezhishen/p/7349324.html