We have repeatedly had issues because too many pages were being created and/or edited in a short time. We need to limit these rates to keep the infrastructure sane.
Why is this a problem?
When people create or edit items too fast on Wikidata, it causes problems in various parts of the infrastructure and puts strain on Wikidata's social system:
- dispatching changes to the clients (Wikipedias etc.) is delayed, so edits show up late in recent changes and watchlists there
- the job queue on the clients gets overloaded with page purges and reparses
- the recent changes table on the clients grows too big
- the replication lag between the database servers grows to unacceptable levels
- the query service misses updates
- assignment of new entity IDs gets locked
- ORES scoring can't keep up
- editors can't keep up with the volume of changes and can no longer meaningfully review or maintain them
Current monitoring
- replication lag (already taken into account by many tools/bots, typically via the API's maxlag parameter; see the sketch below)
- dispatch lag (often not taken into account yet)
TODO
- limit max page creation rate to 40/min per account
- limit max edit rate to 80/min per account (until such server-side limits exist, bots can self-throttle; see the sketch below)
Next steps
- T192026: Add an API parameter analogous to "maxlag" for dispatch lag (not in scope for this task; a hypothetical sketch follows below)