rancher: Cannot delete instances that have mount points to a volume which has other active mount points

Rancher-server version - v1.2.1-rc1

Steps to reproduce the problem: in a Cattle environment, create an environment, create an LB service, and create services that have host bind mounts.

In a k8s environment, create a few pods, then delete a few of them.

Wait for the instances to reach the “purged” state, then wait for the cleanup thread to pick up the purged instances.

Purged instances do not get cleaned up from the instance table (and consequently services and environments do not get cleaned up either):

2016-12-14 06:13:57,809 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."healthcheck_instance" where id in [1, 4, 8, 11, 13, 14, 18]
2016-12-14 06:13:58,798 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] SQL [delete from `instance` where `instance`.`id` in (?, ?, ?, ?, ?, ?, ?)]; Cannot delete or update a parent row: a foreign key constraint fails (`cattle`.`service_log`, CONSTRAINT `fk_service_log__instance_id` FOREIGN KEY (`instance_id`) REFERENCES `instance` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION)
2016-12-14 06:13:58,798 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."instance" where id in [32, 33, 36, 69, 5, 18, 19, 20, 25, 34]
2016-12-14 06:13:58,931 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."service_index" where id in [14, 15, 16]
2016-12-14 06:13:59,053 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."image" where id in [2, 18, 19, 20, 26, 28, 29, 30, 31, 32, 34, 35, 69, 70, 71, 72, 73]
2016-12-14 06:13:59,302 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."service" where id in [20, 21, 22]
2016-12-14 06:13:59,443 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."environment" where id in [10]
2016-12-14 06:13:59,560 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."agent" where id in [3, 11, 15, 16, 17, 19, 20, 33]
2016-12-14 06:14:00,589 ERROR [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Skipped "cattle"."account" where id in [10, 18, 22, 23, 24, 26, 27, 40]
2016-12-14 06:14:00,602 INFO  [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] Cleanup other tables [cutoff=Wed Dec 14 06:08:55 UTC 2016]
2016-12-14 06:14:00,602 INFO  [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] [Rows Deleted] config_item_status=32 external_event=1 healthcheck_instance_host_map=11 image_storage_pool_map=5 instance_host_map=2 ip_address_nic_map=3 mount=1 nic=3 port=3 service_expose_map=5 volume_storage_pool_map=6 external_handler=4 ip_address=3 volume=6 deployment_unit=5 credential=5
2016-12-14 06:14:00,602 INFO  [:] [] [] [] [ecutorService-5] [i.c.p.core.cleanup.TableCleanup     ] [Rows Skipped] nic=3 healthcheck_instance=7 instance=10 service_index=3 image=17 service=3 environment=1 agent=8 account=8
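The failure mode in the log can be reproduced in miniature: the `service_log` table holds a foreign key to `instance` declared `ON DELETE NO ACTION`, so the database rejects any delete of a parent row that is still referenced. The sketch below models this in SQLite (table and column names mirror the Cattle schema, but this is an illustrative model, not the real database):

```python
import sqlite3

# Model: a child table (service_log) references a parent (instance)
# with no cascading delete, so deleting the parent row is rejected.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in
conn.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, state TEXT)")
conn.execute("""
    CREATE TABLE service_log (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instance(id)
            ON DELETE NO ACTION ON UPDATE NO ACTION
    )
""")
conn.execute("INSERT INTO instance VALUES (5, 'purged')")
conn.execute("INSERT INTO service_log VALUES (1, 5)")

try:
    # Mirrors TableCleanup's "delete from instance where id in (...)"
    conn.execute("DELETE FROM instance WHERE id = 5")
except sqlite3.IntegrityError as e:
    print("delete skipped:", e)  # FOREIGN KEY constraint failed
```

This is exactly why TableCleanup logs the rows as "Skipped" rather than deleted: the parent row survives the failed delete.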
mysql> select id,name,state from instance where state="purged";
+----+-----------------------------------------------+--------+
| id | name                                          | state  |
+----+-----------------------------------------------+--------+
|  5 | scheduler-scheduler-1                         | purged |
| 18 | test1-test1-1                                 | purged |
| 19 | test1-test2-1                                 | purged |
| 20 | test1-lb-1-1                                  | purged |
| 25 | kubernetes-kubernetes-1                       | purged |
| 27 | kubernetes-etcd-data-1                        | purged |
| 29 | kubernetes-kubernetes-kube-hostname-updater-1 | purged |
| 30 | kubernetes-etcd-1                             | purged |
| 32 | kubernetes-rancher-ingress-controller-1       | purged |
| 33 | kubernetes-kubectld-1                         | purged |
| 34 | kubernetes-rancher-kubernetes-agent-1         | purged |
| 36 | scheduler-scheduler-1                         | purged |
| 69 | kubernetes-rancher-kubernetes-agent-1         | purged |
| 70 | testnginx-byhe7                               | purged |
| 71 | testnginx-41x9o                               | purged |
| 72 | testnginx                                     | purged |
| 73 | testnginx                                     | purged |
| 74 | scheduler-scheduler-1                         | purged |
| 75 | scheduler-scheduler-1                         | purged |
| 77 | kubernetes-kubectld-1                         | purged |
| 78 | kubernetes-rancher-kubernetes-agent-1         | purged |
+----+-----------------------------------------------+--------+

Volumes remain in the “detached” state. Is this a problem? Will these volumes ever be purged automatically?

mysql> select id,name,state,removed from volume;

|  42 | test1                                                            | detached | NULL                |
|  44 | test2                                                            | detached | NULL                |

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 1
  • Comments: 21 (8 by maintainers)

Most upvoted comments

There are foreign keys that prevent deleting records from a table directly, so we first have to delete the records in other tables that reference them.

A failed DB cleanup makes Rancher unstable.

For now, I have to use my own tool to find all the dependencies and clean them up manually.

For example, `delete from environment where state in ('removed')` fails until the referencing rows are removed first.
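Cleaning up manually means deleting in dependency order: child rows first, then the parent rows the cleanup thread was skipping. A sketch in MySQL, assuming `service_log` is the only table blocking the purged instances (which may not hold for every install, so verify inside a transaction before committing):

```sql
-- Delete the referencing service_log rows first, then the purged
-- instances; COMMIT only after checking the affected row counts.
START TRANSACTION;
DELETE sl FROM service_log sl
  JOIN instance i ON i.id = sl.instance_id
  WHERE i.state = 'purged';
DELETE FROM instance WHERE state = 'purged';
COMMIT;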

BTW, I used this query to get all tables and all of their foreign-key references:

select * from `INFORMATION_SCHEMA`.`KEY_COLUMN_USAGE` where `CONSTRAINT_SCHEMA`='cattle'
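The same dependency map can be derived programmatically. A minimal sketch in Python using SQLite, where `PRAGMA foreign_key_list` plays the role of MySQL's `KEY_COLUMN_USAGE` (table names mirror the Cattle schema for illustration only):

```python
import sqlite3

# Build a toy schema with two tables referencing `instance`.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instance (id INTEGER PRIMARY KEY);
    CREATE TABLE service_log (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instance(id)
    );
    CREATE TABLE mount (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instance(id)
    );
""")

def referencing_tables(conn, parent):
    """Return the child tables that hold a foreign key to `parent`."""
    children = []
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    for (name,) in tables:
        # Row layout: (id, seq, table, from, to, on_update, on_delete, match)
        for fk in conn.execute(f"PRAGMA foreign_key_list({name})"):
            if fk[2] == parent:
                children.append(name)
    return sorted(set(children))

print(referencing_tables(conn, "instance"))  # ['mount', 'service_log']
```

Knowing the children of each table gives the safe deletion order: leaves first, then their parents.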

Fresh install of 1.6.15.

task.table.cleanup.schedule=30
main_tables.purge.after.seconds=180
service_log.purge.after.seconds=120
events.purge.after.seconds=180

Create an environment. Create an LB service. Create services that have host bind mounts (anonymous as well). Delete the service. Wait for the instances to reach the “purged” state. Wait for the cleanup thread to be executed.

Instances remain in the “purged” state, and the cleanup thread is not able to clean them up:

mysql> select id,name,state,created,removed from instance where state = "purged";
+----+---------------------------+--------+---------------------+---------------------+
| id | name                      | state  | created             | removed             |
+----+---------------------------+--------+---------------------+---------------------+
|  2 | scheduler-scheduler-1     | purged | 2018-04-03 20:59:19 | 2018-04-03 21:01:06 |
|  3 | healthcheck-healthcheck-1 | purged | 2018-04-03 20:59:19 | 2018-04-03 21:01:06 |
| 30 | scheduler-scheduler-1     | purged | 2018-04-03 21:01:07 | 2018-04-03 21:07:18 |
| 32 | scheduler-scheduler-1     | purged | 2018-04-03 21:07:17 | 2018-04-03 21:16:45 |
| 34 | test1-lb-1-1              | purged | 2018-04-03 21:08:19 | 2018-04-03 21:10:43 |
| 40 | scheduler-scheduler-1     | purged | 2018-04-03 21:15:49 | 2018-04-03 21:27:03 |
+----+---------------------------+--------+---------------------+---------------------+
6 rows in set (0.00 sec)

Upgrade to 1.6.16-rc1.

Once the cleanup thread runs, all of the above instances in the “purged” state get cleaned up from the DB.