
ClearUserWatchlistJob/WatchedItemStore::removeWatchBatchForUser bad database performance on enwiki and others, causing database lag
Closed, Resolved · Public

Description

Some alerts were received about replication lag on enwiki:

[11:33] <icinga-wm> PROBLEM - MariaDB sustained replica lag on db1106 is CRITICAL: 2.4 ge 2 https://wikitech.wikimedia.org/wiki/MariaDB/troubleshooting#Replication_lag https://grafana.wikimedia.org/d/000000273/mysql?orgId=1&var-server=db1106&var-port=9104

The lag issue seems to have started at around 11:20, with that host and some others lagging behind by several seconds.

While I am not yet 100% sure of the cause, Tendril gave us the top offender among long-running queries in the last hour:

Hits 	Tmax 	Tavg 	Tsum 	Hosts 	Users 	Schemas
73	72	40	2,947	db1105:3311, db1144:3314, db1146:3314	<mw user>	commonswiki, enwiki
SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100 /* <hash> db1146:3314 commonswiki 41s */

@Joe told me those were (possibly) jobs generated by newly deployed feature "watchlist expiry"

This means that several dozen long-running queries have probably been running over the last hour, until they are killed by the query killer.

Event Timeline

Restricted Application added a subscriber: Aklapper.
Marostegui subscribed.

@ifried this needs to be looked at before the break, I would say. During the holiday break there will be reduced availability, and if this increases it can become a serious problem.
We should decide whether we are able to get this fixed before the break and/or disable the functionality on big wikis if we are not confident about it, especially with reduced availability in the upcoming days.

The master had a huge spike of DELETEs at 11:20, which matches the above graphs:

Captura de pantalla 2020-12-18 a las 14.28.02.png (800×1 px, 108 KB)

The DELETEs are like (extracted from the master binlog):

DELETE /* WatchedItemStore::updateOrDeleteExpiries  */ `watchlist_expiry` FROM `watchlist_expiry`, `watchlist` WHERE we_item=wl_id  AND ((wl_user = xxx AND wl_namespace = 2 AND wl_title = 'xxx'))

@Joe told me those were (possibly) jobs generated by newly deployed feature "watchlist expiry"

The ClearUserWatchlist job has been around for a while (since 1.31). But per T249782, d09437003 added a line to delete from the watchlist_expiry table in this job. And that query looks different from what @Marostegui posted:

$dbw->delete( 'watchlist_expiry', [ 'we_item' => $watchlistIds ], __METHOD__ );
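
For illustration, here is a minimal sketch of the batch flow described above, assuming it roughly follows this shape (the real ClearUserWatchlistJob spreads batches across job-queue runs rather than looping in place; $userId, $maxId and the connection setup are illustrative, not the actual core code):

// Sketch only: select a batch of wl_ids for the user, delete the watchlist
// rows, delete the matching watchlist_expiry rows, then wait for replication.
use MediaWiki\MediaWikiServices;

$services = MediaWikiServices::getInstance();
$lbFactory = $services->getDBLoadBalancerFactory();
$dbw = $lbFactory->getMainLB()->getConnectionRef( DB_PRIMARY );
$batchSize = $services->getMainConfig()->get( 'UpdateRowsPerQuery' );

// This is the SELECT that shows up as a slow query above.
$watchlistIds = $dbw->selectFieldValues(
    'watchlist',
    'wl_id',
    [
        'wl_user' => $userId,                    // illustrative value
        'wl_id <= ' . $dbw->addQuotes( $maxId ), // illustrative value
    ],
    __METHOD__,
    [ 'ORDER BY' => 'wl_id ASC', 'LIMIT' => $batchSize ]
);

if ( $watchlistIds !== [] ) {
    $dbw->delete( 'watchlist', [ 'wl_id' => $watchlistIds ], __METHOD__ );
    // The line added by d09437003 for watchlist expiry:
    $dbw->delete( 'watchlist_expiry', [ 'we_item' => $watchlistIds ], __METHOD__ );
    $lbFactory->waitForReplication();
}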

To summarize all my findings above, there are three "bad" things happening right now:

  • Long-running SELECT queries like this: SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100
  • Locking/contention on deletes like this: DELETE FROM watchlist WHERE wl_id IN (<long list>)
  • Locking/contention on deletes like this: DELETE FROM watchlist WHERE wl_id = '<id>'
jcrespo renamed this task from ClearUserWatchlistJob bad database performance on enwiki, commons, causing database lag? to ClearUserWatchlistJob/WatchedItemStore::removeWatchBatchForUser bad database performance on enwiki and others, causing database lag. · Dec 18 2020, 2:31 PM

To summarize all my findings above, there are three "bad" things happening right now:

  • Long-running SELECT queries like this: SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100
  • Locking/contention on deletes like this: DELETE FROM watchlist WHERE wl_id IN (<long list>)
  • Locking/contention on deletes like this: DELETE FROM watchlist WHERE wl_id = '<id>'

And the spike of DELETEs on the master, which are like:

DELETE /* WatchedItemStore::updateOrDeleteExpiries  */ `watchlist_expiry` FROM `watchlist_expiry`, `watchlist` WHERE we_item=wl_id  AND ((wl_user = xxx AND wl_namespace = 2 AND wl_title = 'xxx'))

The engineers on CommTech are looking into this right now. This includes @MusikAnimal and @HMonroy and me.

removeWatchBatchForUser() is only called from Special:EditWatchlist, which has known, long-standing performance problems (i.e. T41510). Looking at P13602, the Commons user appears to be a bot that is watching an extremely large number of pages (none of them are temporarily watched, though). The core issue here is most surely T258108. That is, the bot's watchlist inadvertently became extremely large and the operator tried to clear it. The other user mentioned in the paste (from enwiki) appears to be a prolific user completely clearing their watchlist, which is not something that happens very often.

The deletions in ClearUserWatchlistJob are batched and wait for replication. The same is true of WatchedItemStore::updateOrDeleteExpiries (see the caller WatchedItemStore::addWatchBatchForUser). We're not sure how to improve performance here other than lowering $wgUpdateRowsPerQuery.

While this deserves attention, our initial investigation suggests this is a relatively uncommon event and doesn't appear to be an issue with Expiring-Watchlist-Items, specifically.

removeWatchBatchForUser() is only called from Special:EditWatchlist, which has known, long-standing performance problems (i.e. T41510). Looking at P13602, the Commons user appears to be a bot that is watching an extremely large number of pages (none of them are temporarily watched, though). The core issue here is most surely T258108. That is, the bot's watchlist inadvertently became extremely large and the operator tried to clear it. The other user mentioned in the paste (from enwiki) appears to be a prolific user completely clearing their watchlist, which is not something that happens very often.

Is there a way to throttle those deletes? Even if it is not a common issue, it has the potential to cause lag.

The deletions in ClearUserWatchlistJob are batched and wait for replication. The same is true of WatchedItemStore::updateOrDeleteExpiries (see the caller WatchedItemStore::addWatchBatchForUser). We're not sure how to improve performance here other than lowering $wgUpdateRowsPerQuery.

By the name of the variable I guess that's the size of the delete batches? Decreasing it, with fewer DELETEs per batch and more time between batches, would be a good approach for now if that is doable.

What worries me the most is the huge spike of DELETEs. Deletions on MySQL are very expensive, and that spike will certainly cause lag on the slaves and put a lot more load on the hosts, which might end up piling up connections and having a cascade effect.

Thanks for looking into the issue so fast.

Is there a way to throttle those deletes? Even if it is not a common issue, it has the potential to cause lag.

It's already going through the job queue. Isn't that the normal way to safely throttle things?

By the name of the variable I guess that's the size of the delete batches? Decreasing it, with fewer DELETEs per batch and more time between batches, would be a good approach for now if that is doable.

$wgUpdateRowsPerQuery is used all over MediaWiki. We can lower it but I don't know what kind of ripple effect that will have on other functionality. Another option is to have a separate (and lower) throttle just for deleting the watchlist, I suppose.
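
To make that second option concrete, a sketch of what a separate, lower throttle could look like in LocalSettings.php. Note that $wgWatchlistClearRowsPerQuery is a hypothetical name used only for illustration; it is not an existing setting:

// $wgUpdateRowsPerQuery is a real setting that controls batch sizes all over
// MediaWiki; lowering it globally affects much more than watchlist clearing.
$wgUpdateRowsPerQuery = 50;

// Hypothetical, more targeted alternative (NOT an existing setting): a
// dedicated, smaller batch size used only when clearing watchlists, so other
// batched writes keep the default of 100.
$wgWatchlistClearRowsPerQuery = 25;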

What worries me the most is the huge spike of DELETEs. Deletions on MySQL are very expensive, and that spike will certainly cause lag on the slaves and put a lot more load on the hosts, which might end up piling up connections and having a cascade effect.

I'm guessing the spike was mostly due to the Commons bot. It was watching close to 2 million pages and there was an apparent attempt to clear all of them. Items are deleted at the same rate as $wgUpdateRowsPerQuery (100 I think). The attempt apparently failed, because I see their watchlist is now growing again at a few items per second (the same rate that it is uploading files). This will continue until someone asks the operator to turn off the auto-watchlist preference or implements T258108. It may require DBA intervention to successfully purge their watchlist due to its size. Related: T258098.

The enwiki user now has 0 items in their watchlist. I don't know how big it was before, but I'm guessing it was a manageable size since the user is human and unlike the bot was actually making use of their watchlist.

I definitely don't see a quick fix, but at the same time I don't have any reason to believe this scenario is likely to happen again beyond circumstance. It's sort of analogous to an admin editing a very highly transcluded template. Doing so can cause millions of jobs to pile up. It can be disruptive, but it's sort of expected. Similarly I don't see how we could avoid a spike in deletions for a user trying to clear ~2 million pages from their watchlist.

At any rate, I'm fairly confident now that the performance problems are not due to the Expiring-Watchlist-Items feature. Sorry, I know it sounds like I'm trying to shift the blame... I just wanted to clarify that it seems this is a larger, preexisting issue with watchlist management.

Is there a way to throttle those deletes? Even if it is not a common issue, it has the potential to cause lag.

It's already going through the job queue. Isn't that the normal way to safely throttle things?

I would have guessed that too, yeah.

By the name of the variable I guess that's the size of the delete batches? Decreasing it, with fewer DELETEs per batch and more time between batches, would be a good approach for now if that is doable.

$wgUpdateRowsPerQuery is used all over MediaWiki. We can lower it but I don't know what kind of ripple effect that will have on other functionality. Another option is to have a separate (and lower) throttle just for deleting the watchlist, I suppose.

Ah, I thought it could be adjusted on a per-case basis; if that is not the case, it is definitely better not to touch it, especially just before and during the code freeze.

What worries me the most is the huge spike of DELETEs. Deletions on MySQL are very expensive, and that spike will certainly cause lag on the slaves and put a lot more load on the hosts, which might end up piling up connections and having a cascade effect.

I'm guessing the spike was mostly due to the Commons bot. It was watching close to 2 million pages and there was an apparent attempt to clear all of them. Items are deleted at the same rate as $wgUpdateRowsPerQuery (100 I think). The attempt apparently failed, because I see their watchlist is now growing again at a few items per second (the same rate that it is uploading files). This will continue until someone asks the operator to turn off the auto-watchlist preference or implements T258108. It may require DBA intervention to successfully purge their watchlist due to its size. Related: T258098.

Thanks for providing context to this.
It sounds like T258108 definitely needs some priority, as the way we currently have this implemented can cause a self-inflicted DDoS, given that lag on the slaves has nasty consequences in general and can generate cascades that might end up with sections temporarily in read-only mode.
T258098 would be very useful indeed, but it probably has less priority than T258108.

According to https://www.mediawiki.org/wiki/Developers/Maintainers the watchlist is owned by Contributors-Team; is that also part of the Growth-Team scope?

The enwiki user now has 0 items in their watchlist. I don't know how big it was before, but I'm guessing it was a manageable size since the user is human and unlike the bot was actually making use of their watchlist.

I definitely don't see a quick fix, but at the same time I don't have any reason to believe this scenario is likely to happen again beyond circumstance. It's sort of analogous to an admin editing a very highly transcluded template. Doing so can cause millions of jobs to pile up. It can be disruptive, but it's sort of expected. Similarly I don't see how we could avoid a spike in deletions for a user trying to clear ~2 million pages from their watchlist.

That's good to know.

At any rate, I'm fairly confident now that the performance problems are not due to the Expiring-Watchlist-Items feature. Sorry, I know it sounds like I'm trying to shift the blame... I just wanted to clarify that it seems this is a larger, preexisting issue with watchlist management.

Would your team be able to help with T258108: Ignore auto-watchlist preferences for bots?

I think Tim's idea at T258108#6617792 makes sense and wouldn't be hard to implement. I won't lie, this issue has been bothering me for a while, hehe! :) I think a little bit of community discussion may be needed here first; I will follow up in T258108. From a technical standpoint CommTech can certainly talk about it and we'll have more to report soon.

I see their watchlist is now growing again at a few items per second (the same rate that it is uploading files).

Let me say for the record, aside from this consequence, the bot on Commons is very much a good bot doing good things. On closer examination the uploads are batched to some extent and done synchronously, so I don't think anything is awry behaviourally. It's just a problem that each upload by default needlessly watches a page.

The <id2> in this query:

SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100 /* <hash> db1146:3314 commonswiki 41s */

is the maximum of the whole table (SELECT MAX(wl_id) FROM watchlist).
Maybe the database gets confused by the fact that this wl_id is possibly far away from the current maximum wl_id for that specific user?

Would it help to select the maximum wl_id for the user, so that a better index gets chosen?

On my development MySQL it does not help, according to some EXPLAINs (it swaps from the PRIMARY key to wl_user_notificationtimestamp and gets a filesort).

Maybe it is better to select in DESC order and use the last id as the new max value for the next job? Then each job gets a smaller number, and the database can try to choose the best index for that query, even with a filesort.

On the other hand, why does it need an ASC order at all? The job just wants to delete everything, so it can fetch the next batch and delete it, regardless of whether it goes from old to new or new to old.
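
For illustration, a sketch of the two ideas above using the database abstraction layer ($dbr, $userId and the batch size of 100 are illustrative; this is not a proposed patch): bound the range by the user's own maximum wl_id instead of the table-wide maximum, and scan in DESC order so the smallest id seen becomes the upper bound for the next job.

// Illustrative only: per-user upper bound plus descending scan.
$userMaxId = $dbr->selectField(
    'watchlist', 'MAX(wl_id)', [ 'wl_user' => $userId ], __METHOD__
);

$ids = $dbr->selectFieldValues(
    'watchlist',
    'wl_id',
    [ 'wl_user' => $userId, 'wl_id <= ' . $dbr->addQuotes( $userMaxId ) ],
    __METHOD__,
    [ 'ORDER BY' => 'wl_id DESC', 'LIMIT' => 100 ]
);

// Carry the smallest id seen into the next job as its new upper bound.
$nextMaxId = $ids ? min( $ids ) - 1 : 0;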

It would help if we can come up with a SELECT similar to the original ones:

SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100 /* <hash> db1146:3314 commonswiki 41s */

Can someone come up with some values for wl_user and wl_id, from users that have enough items on their watchlist, and send them to me privately? Then we can test a SELECT with them and see what the query optimizer does with regards to indexes and such.

It would help if we can come up with a SELECT similar to the original ones:

SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` WHERE wl_user = <id> AND (wl_id <= <id2>) ORDER BY wl_id ASC LIMIT 100 /* <hash> db1146:3314 commonswiki 41s */

Can someone come up with some values for wl_user and wl_id, from users that have enough items on their watchlist, and send them to me privately? Then we can test a SELECT with them and see what the query optimizer does with regards to indexes and such.

I added a more recent example at P13602#75060.

Thank you - let's chat there about the values, and once we've found some examples we can continue the discussion here about the query plans and such.

Thanks @MusikAnimal for providing values to build up a query.
From what I can see, the query plan is quite messed up and it ends up scanning half of the table:

[email protected][(none)]> show explain for 1474624886;
+------+-------------+-----------+-------+-----------------------------------------------+---------+---------+------+----------+-------------+
| id   | select_type | table     | type  | possible_keys                                 | key     | key_len | ref  | rows     | Extra       |
+------+-------------+-----------+-------+-----------------------------------------------+---------+---------+------+----------+-------------+
|    1 | SIMPLE      | watchlist | range | PRIMARY,wl_user,wl_user_notificationtimestamp | PRIMARY | 4       | NULL | 93218872 | Using where |
+------+-------------+-----------+-------+-----------------------------------------------+---------+---------+------+----------+-------------+
1 row in set, 1 warning (0.001 sec)

That query takes around 2 minutes in production.
A quick check kind of shows that using the wl_user index would make a bit more sense, even if it filesorts:

[email protected][commonswiki]> explain SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` USE INDEX(wl_user) WHERE wl_user = x AND (wl_id <= x) ORDER BY wl_id ASC LIMIT 100;
+------+-------------+-----------+------+---------------+---------+---------+-------+---------+------------------------------------------+
| id   | select_type | table     | type | possible_keys | key     | key_len | ref   | rows    | Extra                                    |
+------+-------------+-----------+------+---------------+---------+---------+-------+---------+------------------------------------------+
|    1 | SIMPLE      | watchlist | ref  | wl_user       | wl_user | 4       | const | 4043264 | Using where; Using index; Using filesort |
+------+-------------+-----------+------+---------------+---------+---------+-------+---------+------------------------------------------+
1 row in set (0.001 sec)
[email protected][commonswiki]> SELECT /* ClearUserWatchlistJob::run */ wl_id AS `value` FROM `watchlist` USE INDEX(wl_user) WHERE wl_user = x AND (wl_id <= x) ORDER BY wl_id ASC LIMIT 100;
<snip>
100 rows in set (5.674 sec)

Checking the query plan across all the hosts in s4, to rule out host-specific issues: they all use PRIMARY as the chosen key.
Trying to refresh the table stats doesn't result in a query plan change.

And finally, this is the query optimizer trace. It is interesting to see how using wl_user has a better cost, and that is actually shown there, but it looks like it decides to use the PK because of the ORDER BY (without the ORDER BY it runs instantly). I have redacted the values in the query:

*************************** 1. row ***************************
                            QUERY: ANALYZE FORMAT=JSON SELECT  wl_id AS `value` FROM `watchlist` WHERE wl_user = x AND (wl_id <= x) ORDER BY wl_id ASC LIMIT 100
                            TRACE: {
  "steps": [
    {
      "join_preparation": {
        "select_id": 1,
        "steps": [
          {
            "expanded_query": "select watchlist.wl_id AS `value` from watchlist where watchlist.wl_user = x and watchlist.wl_id <= x order by watchlist.wl_id limit 100"
          }
        ]
      }
    },
    {
      "join_optimization": {
        "select_id": 1,
        "steps": [
          {
            "condition_processing": {
              "condition": "WHERE",
              "original_condition": "watchlist.wl_user = x and watchlist.wl_id <= x",
              "steps": [
                {
                  "transformation": "equality_propagation",
                  "resulting_condition": "watchlist.wl_id <= x and multiple equal(x, watchlist.wl_user)"
                },
                {
                  "transformation": "constant_propagation",
                  "resulting_condition": "watchlist.wl_id <= x and multiple equal(x, watchlist.wl_user)"
                },
                {
                  "transformation": "trivial_condition_removal",
                  "resulting_condition": "watchlist.wl_id <= x and multiple equal(x, watchlist.wl_user)"
                }
              ]
            }
          },
          {
            "table_dependencies": [
              {
                "table": "watchlist",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": []
              }
            ]
          },
          {
            "ref_optimizer_key_uses": [
              {
                "table": "watchlist",
                "field": "wl_user",
                "equals": "x",
                "null_rejecting": false
              },
              {
                "table": "watchlist",
                "field": "wl_user",
                "equals": "x",
                "null_rejecting": false
              }
            ]
          },
          {
            "rows_estimation": [
              {
                "table": "watchlist",
                "range_analysis": {
                  "table_scan": {
                    "rows": x,
                    "cost": 3.87e7
                  },
                  "potential_range_indexes": [
                    {
                      "index": "PRIMARY",
                      "usable": true,
                      "key_parts": ["wl_id"]
                    },
                    {
                      "index": "wl_user",
                      "usable": true,
                      "key_parts": ["wl_user", "wl_namespace", "wl_title"]
                    },
                    {
                      "index": "wl_user_notificationtimestamp",
                      "usable": true,
                      "key_parts": ["wl_user", "wl_notificationtimestamp", "wl_id"]
                    },
                    {
                      "index": "wl_namespace_title",
                      "usable": false,
                      "cause": "not applicable"
                    }
                  ],
                  "best_covering_index_scan": {
                    "index": "wl_user_notificationtimestamp",
                    "cost": 9.95e6,
                    "chosen": true
                  },
                  "setup_range_conditions": [],
                  "group_index_range": {
                    "chosen": false,
                    "cause": "no group by or distinct"
                  },
                  "analyzing_range_alternatives": {
                    "range_scan_alternatives": [
                      {
                        "index": "PRIMARY",
                        "ranges": ["(wl_id) <= (x)"],
                        "rowid_ordered": true,
                        "using_mrr": false,
                        "index_only": false,
                        "rows": 93219254,
                        "cost": 1.88e7,
                        "chosen": false,
                        "cause": "cost"
                      },
                      {
                        "index": "wl_user",
                        "ranges": ["(x) <= (wl_user) <= (x)"],
                        "rowid_ordered": false,
                        "using_mrr": false,
                        "index_only": true,
                        "rows": 4043264,
                        "cost": 303337,
                        "chosen": true
                      },
                      {
                        "index": "wl_user_notificationtimestamp",
                        "ranges": ["(8609812) <= (wl_user) <= (x)"],
                        "rowid_ordered": false,
                        "using_mrr": false,
                        "index_only": true,
                        "rows": 3851938,
                        "cost": 201602,
                        "chosen": true
                      }
                    ],
                    "analyzing_roworder_intersect": {
                      "cause": "too few roworder scans"
                    },
                    "analyzing_index_merge_union": []
                  },
                  "chosen_range_access_summary": {
                    "range_access_plan": {
                      "type": "range_scan",
                      "index": "wl_user_notificationtimestamp",
                      "rows": 3851938,
                      "ranges": ["(x) <= (wl_user) <= (x)"]
                    },
                    "rows_for_plan": 3851938,
                    "cost_for_plan": 201602,
                    "chosen": true
                  }
                }
              },
              {
                "selectivity_for_indexes": [
                  {
                    "index_name": "PRIMARY",
                    "selectivity_from_index": 0.5
                  },
                  {
                    "index_name": "wl_user",
                    "selectivity_from_index": 0.0217
                  }
                ],
                "selectivity_for_columns": [],
                "cond_selectivity": 0.0108
              }
            ]
          },
          {
            "considered_execution_plans": [
              {
                "plan_prefix": [],
                "table": "watchlist",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "ref",
                      "index": "wl_user",
                      "used_range_estimates": true,
                      "rows": 4.04e6,
                      "cost": 143704,
                      "chosen": true
                    },
                    {
                      "access_type": "ref",
                      "index": "wl_user_notificationtimestamp",
                      "used_range_estimates": true,
                      "rows": 3.85e6,
                      "cost": 12919,
                      "chosen": true
                    },
                    {
                      "type": "scan",
                      "chosen": false,
                      "cause": "cost"
                    }
                  ],
                  "chosen_access_method": {
                    "type": "ref",
                    "records": 3.85e6,
                    "cost": 12919,
                    "uses_join_buffering": false
                  }
                },
                "rows_for_plan": 3.85e6,
                "cost_for_plan": 783307,
                "selectivity": 0.5248,
                "estimated_join_cardinality": 2.02e6
              }
            ]
          },
          {
            "best_join_order": ["watchlist"]
          },
          {
            "attaching_conditions_to_tables": {
              "original_condition": "watchlist.wl_user = x and watchlist.wl_id <= x",
              "attached_conditions_computation": [],
              "attached_conditions_summary": [
                {
                  "table": "watchlist",
                  "attached": "watchlist.wl_id <= x"
                }
              ]
            }
          },
          {
            "reconsidering_access_paths_for_index_ordering": {
              "clause": "ORDER BY",
              "fanout": 1,
              "read_time": 12919,
              "table": "watchlist",
              "rows_estimation": 3851938,
              "possible_keys": [
                {
                  "index": "PRIMARY",
                  "can_resolve_order": true,
                  "updated_limit": 9222,
                  "range_scan_time": 927.68,
                  "index_scan_time": 927.68,
                  "records": 93219254,
                  "chosen": true
                },
                {
                  "index": "wl_user",
                  "can_resolve_order": false,
                  "cause": "order can not be resolved by key"
                },
                {
                  "index": "wl_user_notificationtimestamp",
                  "can_resolve_order": false,
                  "cause": "order can not be resolved by key"
                },
                {
                  "index": "wl_namespace_title",
                  "can_resolve_order": false,
                  "cause": "order can not be resolved by key"
                }
              ]
            }
          },
          {
            "table": "watchlist",
            "range_analysis": {
              "table_scan": {
                "rows": 186438509,
                "cost": 2e308
              },
              "potential_range_indexes": [
                {
                  "index": "PRIMARY",
                  "usable": true,
                  "key_parts": ["wl_id"]
                },
                {
                  "index": "wl_user",
                  "usable": false,
                  "cause": "not applicable"
                },
                {
                  "index": "wl_user_notificationtimestamp",
                  "usable": false,
                  "cause": "not applicable"
                },
                {
                  "index": "wl_namespace_title",
                  "usable": false,
                  "cause": "not applicable"
                }
              ],
              "setup_range_conditions": [],
              "group_index_range": {
                "chosen": false,
                "cause": "no group by or distinct"
              },
              "analyzing_range_alternatives": {
                "range_scan_alternatives": [
                  {
                    "index": "PRIMARY",
                    "ranges": ["(wl_id) <= (x)"],
                    "rowid_ordered": true,
                    "using_mrr": false,
                    "index_only": false,
                    "rows": 93219254,
                    "cost": 1.88e7,
                    "chosen": true
                  }
                ],
                "analyzing_roworder_intersect": {
                  "cause": "too few roworder scans"
                },
                "analyzing_index_merge_union": []
              },
              "chosen_range_access_summary": {
                "range_access_plan": {
                  "type": "range_scan",
                  "index": "PRIMARY",
                  "rows": 93219254,
                  "ranges": ["(wl_id) <= (x)"]
                },
                "rows_for_plan": 93219254,
                "cost_for_plan": 1.88e7,
                "chosen": true
              }
            }
          }
        ]
      }
    },
    {
      "join_execution": {
        "select_id": 1,
        "steps": []
      }
    }
  ]
}
MISSING_BYTES_BEYOND_MAX_MEM_SIZE: 0
          INSUFFICIENT_PRIVILEGES: 0

I am not sure how often this query runs, but if it is taking more than 60 seconds (which it definitely does) it is certainly being killed by the query killer (which kills queries that run with the wikiuser user and that run for more than 60 seconds).
If the ORDER BY is really necessary, then, as wl_user looks like it gives a faster execution time, we could think about using USE INDEX (wl_user).
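
If the ORDER BY had to stay, the index hint could be passed through the rdbms layer's 'USE INDEX' select option. A sketch only (variable names are illustrative, and this is not the change that was ultimately merged):

// Sketch: force the wl_user index while keeping the ORDER BY.
$ids = $dbw->selectFieldValues(
    'watchlist',
    'wl_id',
    [ 'wl_user' => $userId, 'wl_id <= ' . $dbw->addQuotes( $maxId ) ],
    __METHOD__,
    [
        'ORDER BY' => 'wl_id ASC',
        'LIMIT' => 100,
        'USE INDEX' => [ 'watchlist' => 'wl_user' ],
    ]
);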

Change 651574 had a related patch set uploaded (by MusikAnimal; owner: MusikAnimal):
[mediawiki/core@master] ClearUserWatchlistJob: remove ORDER BY causing slow query

https://gerrit.wikimedia.org/r/651574

I don't think there is a need for the ORDER BY. This job is meant to remove all watched items for a user, hence it shouldn't matter what order we do that in, so long as they all get deleted. I've added @Addshore to the patch just in case, as they authored the original code with https://gerrit.wikimedia.org/r/c/mediawiki/core/+/277350.
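
As I read the patch title, the gist is simply dropping the ORDER BY from the batch SELECT's options; simplified to just those options (the surrounding job code is omitted):

// Before: the sort nudges the optimizer towards the PRIMARY key.
$options = [ 'ORDER BY' => 'wl_id ASC', 'LIMIT' => $batchSize ];

// After: any batch of matching rows will do, since all of them get deleted.
$options = [ 'LIMIT' => $batchSize ];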

Sorry, I'm afraid the "enwiki user" was me; when the "clear watchlist" button on my oversized watchlist had no visible effect, I ended up doing a "raw watchlist" edit, cleared the field, saved the page and repeated a few times. That actually emptied the watchlist over time; I could see the number of pages decreasing quickly after this.

I have since created https://en.wikipedia.org/wiki/User:ToBeFree/scripts/clear-watchlist.js to deal with this more reliably in the future.

Change 651574 merged by jenkins-bot:
[mediawiki/core@master] ClearUserWatchlistJob: remove ORDER BY causing slow query

https://gerrit.wikimedia.org/r/651574

Sorry, I'm afraid the "enwiki user" was me; when the "clear watchlist" button on my oversized watchlist had no visible effect, I ended up doing a "raw watchlist" edit, cleared the field, saved the page and repeated a few times. That actually emptied the watchlist over time; I could see the number of pages decreasing quickly after this.

I have since created https://en.wikipedia.org/wiki/User:ToBeFree/scripts/clear-watchlist.js to deal with this more reliably in the future.

No need to be sorry! This was a software issue that was not a result of user error. Once the next deployment train arrives on enwiki (scheduled for January 7), I believe the "Clear watchlist" functionality for users with a large watchlist should be fixed.

https://gerrit.wikimedia.org/r/651574 won't decrease the rate of DELETEs, though. I'm not sure how to do that, if that is still believed to be an issue.

I will go ahead and remove Expiring-Watchlist-Items as I don't believe it played a role here. The DELETEs for watchlist_expiry likely affected very few or no rows at all, given the feature hasn't been live for very long.

@MusikAnimal I think the DELETEs are still a problem, but thankfully they don't seem to happen very often, so I am happy to close this task once we get the fix for the slow query.

These jobs can really run as slowly as we are comfortable with; we could use smaller batches for these jobs than wgUpdateRowsPerQuery if that makes sense (right now wgUpdateRowsPerQuery is respected).
We could also have different behaviours for when we know we will have tens of thousands of these jobs spawning, vs., say, 10.

I hope the ORDER BY removal had some visible impact?

Was this fix pushed already?

Yes, it should be on all wikis now.

I hope the ORDER BY removal had some visible impact?

As I understand it, this issue only affected users with a very large watchlist. Clearing your watchlist is a pretty rare operation, especially for prolific users (who would usually rather unwatch pages piecemeal than clear it entirely). But from the tests we conducted, the timeouts mentioned at P13602 will not happen anymore with the ORDER BY clause removed, so I believe this particular task can be resolved now.

If/when we attempt T258098, we can use ClearUserWatchlistJob and that will further validate how much removing the ORDER BY helped. It would also be an opportunity to make more refinements like further throttling of the DELETEs. I don't think it's urgent but if the DBAs ever become concerned about the size of the watchlist table, pursuing T258098 would free up a lot of space, at least on the big wikis.

Related: T258108 is also live on all wikis. I checked a few bots and was able to confirm they are no longer auto-watchlisting. Given the rate some of these bots operated, I suspect the overall rate of growth of the watchlist tables on enwiki and commons (in particular) will be noticeably lower now.

Thanks @MusikAnimal - I am going to close this as fixed, as all the possible follow-ups we can do have their own tasks.
I will keep an eye on T258098: Purge unused watchlist rows, which can be particularly interesting - even if we are not running into issues at the moment.

Thanks everyone who helped here!