PoolCounter
PoolCounter is a lock manager service written in C. PoolCounter provides mutex-like functionality, with a limited wait queue length. If too many servers try to do the same thing at the same time, the wait queue is "full" and the client application can take a fallback action instead, such as returning a stale cache entry or displaying an error message.
PoolCounter was created for MediaWiki, to prevent site outages due to massive wastage of CPU after a popular article's cache is invalidated (the "Michael Jackson problem"). PoolCounter's semaphore semantics allow MediaWiki to restrict the number of web servers that concurrently parse the same new revision of an article after an edit. PoolCounter has since been put to other uses as well, such as rate limiting for thumbnail scaling requests.
Installation
Server
On Debian or Ubuntu, install the packaged server:
apt install poolcounter
See packages.debian.org and packages.ubuntu.com.
Alternatively, you can compile the server from its source in the mediawiki/services/poolcounter repository, or you can use a standard Redis server instead.
Client
MediaWiki uses PoolCounter via a client written in PHP, configured via $wgPoolCounterConf, which currently supports two kinds of backends:
- PoolCounter_Client - backed by a poolcounter server (used by Wikipedia).
- PoolCounterRedis - backed by a Redis server.
The PHP client source is in the /includes/poolcounter directory of MediaWiki core.
There is also an experimental Python client for the poolcounter server, which is used by Thumbor.
Configure
To enable the PoolCounter client, configure it via $wgPoolCounterConf and specify your server's address via $wgPoolCountClientConf.
The MediaWiki client can dynamically specify the pool size and wait timeouts that the PoolCounter server will use. The PoolCounter server itself does not require configuration.
Architecture
The server is a single-threaded C program based on libevent. It does not use autoconf; it has a plain makefile suitable for a normal Linux environment. It currently has no daemonize code, so it is backgrounded by systemd.
In MediaWiki, the client must be a subclass of PoolCounter, and the class holding the application-specific logic must be a subclass of PoolCounterWork. See Manual:$wgPoolCounterConf#Usage for details.
Protocol
The network protocol is line-based, with parameters separated by spaces (spaces in parameters are percent-encoded). The client opens a connection, sends a lock acquire command, does the work, sends a lock release command, then closes the connection. The following commands are defined:
- ACQ4ANY <key> <active worker limit> <total worker limit> <timeout>
- This is used to acquire a lock when the client is capable of using the cache entry generated by another process. If the active pool worker limit is exceeded, the server will give a delayed response to this command. When a client completes its work, all processes which are waiting with ACQ4ANY will immediately be woken so that they can read the new cache entry.
- ACQ4ME <key> <active worker limit> <total worker limit> <timeout>
- This is used to acquire a lock when cache sharing is not possible or not applicable, for example when an article rendering request involves a non-default stub threshold. When a lock of this kind is released, only one waiting process is woken, so as to keep the worker population the same.
- RELEASE
- releases the lock that the client most recently acquired
- STATS [FULL|UPTIME]
- show statistics
The possible responses for ACQ4ANY/ACQ4ME:
- LOCKED
- successfully acquired a lock. Client is expected to do the work, then send RELEASE.
- DONE
- sent to wake up a waiting client
- QUEUE_FULL
- there are more workers than <total worker limit>
- TIMEOUT
- there are more workers than <active worker limit>; no slot was freed up after waiting for <timeout> seconds
- LOCK_HELD
- trying to get a lock when one is already held
For RELEASE:
- NOT_LOCKED
- client does not currently hold any locks
- RELEASED
- lock successfully released
For any command:
- ERROR <message>
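As an illustration of the command and response formats above, here is a minimal Python client sketch. Only the protocol lines themselves come from this page; the host, port, key, and pool-size numbers are illustrative placeholders, and the class and function names are invented for the example.

```python
# Minimal sketch of a client speaking the line-based PoolCounter
# protocol described above. Host/port/key and pool sizes are
# illustrative placeholders, not production values.
import socket
from urllib.parse import quote


def acq4me(key: str, workers: int, maxqueue: int, timeout: int) -> str:
    """Build an ACQ4ME command line; spaces in the key are percent-encoded."""
    return f"ACQ4ME {quote(key, safe=':')} {workers} {maxqueue} {timeout}\n"


class PoolCounterClient:
    """One TCP connection holding at most one lock, per the protocol."""

    def __init__(self, host: str = "localhost", port: int = 7531):
        self.sock = socket.create_connection((host, port))
        self.io = self.sock.makefile("rw", newline="\n")

    def command(self, line: str) -> str:
        self.io.write(line)
        self.io.flush()
        return self.io.readline().strip()

    def close(self) -> None:
        self.io.close()
        self.sock.close()


if __name__ == "__main__":
    client = PoolCounterClient()
    resp = client.command(acq4me("enwiki:Example key", 4, 100, 10))
    if resp == "LOCKED":
        try:
            pass  # ... do the expensive work here ...
        finally:
            print(client.command("RELEASE\n"))  # expect RELEASED
    else:
        # QUEUE_FULL or TIMEOUT: take the fallback action instead
        print("fallback path, got:", resp)
    client.close()
```

Note that RELEASE takes no key: it releases whatever lock this connection most recently acquired, which is why the sketch keeps one lock per connection.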
Testing
$ echo 'STATS FULL' | nc -w1 localhost 7531
uptime: 633 days, 15209h 42m 26s
total processing time: 85809 days 2086330h 0m 24.000000s
average processing time: 0.957994s
gained time: 1867 days 44820h 50m 24.000000s
waiting time: 390 days 9365h 18m 24.000000s
waiting time for me: 389 days 9343h 3m 28.000000s
waiting time for anyone: 22h 14m 53.898438s
waiting time for good: 520 days 12503h 48m 24.000000s
wasted timeout time: 473 days 11375h 2m 44.000000s
total_acquired: 7739031655
total_releases: 7736374042
hashtable_entries: 119
processing_workers: 119
waiting_workers: 216
connect_errors: 0
failed_sends: 1
full_queues: 10294544
lock_mismatch: 227
release_mismatch: 0
processed_count: 7739031536
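The STATS output is a simple list of name: value lines, so it is straightforward to consume from monitoring scripts. A small sketch (field names taken from the sample output above; the parser itself is an assumption, not part of PoolCounter):

```python
# Parse PoolCounter `STATS FULL` output into a dict.
# Plain integer counters become ints; timing fields that carry
# units (days/hours/minutes/seconds) are kept as strings.
def parse_stats(text: str) -> dict:
    stats = {}
    for line in text.splitlines():
        if ": " not in line:
            continue
        name, value = line.split(": ", 1)
        stats[name] = int(value) if value.isdigit() else value
    return stats


sample = """uptime: 633 days, 15209h 42m 26s
total_acquired: 7739031655
full_queues: 10294544
processing_workers: 119"""

print(parse_stats(sample)["total_acquired"])  # 7739031655
```

A rising full_queues counter relative to total_acquired is the signal that clients are hitting QUEUE_FULL and taking their fallback paths.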
Request tracing in production
Quickly inspect traffic in production
On a MediaWiki appserver you can do:
sudo tcpdump -A 'port 7531 and tcp[tcpflags] & tcp-push != 0'
Trivial Wireshark support for the protocol
The following Lua script is a trivial 'dissector' for Wireshark that simply stringifies the payloads of PoolCounter network packets, so you can then add that as a column displayed in Wireshark's UI:
--[[
Trivial Poolcounter wire protocol dissector.
Simply renders payload as a string field, which can be then
enabled as a column.
Copyright © 2020 Chris Danis & the Wikimedia Foundation
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
--]]
poolcounter_protocol = Proto("PoolCounter", "PoolCounter wire protocol")
pc_command_str = ProtoField.string("poolcounter.cmd")
poolcounter_protocol.fields = {pc_command_str}
function poolcounter_protocol.dissector(buffer, pinfo, tree)
length = buffer:len()
if length == 0 then return end
pinfo.cols.protocol = poolcounter_protocol.name
local subtree = tree:add(poolcounter_protocol, buffer(), "PoolCounter protocol data")
subtree:add(pc_command_str, buffer(0,length-1))
end
local tcp_port = DissectorTable.get("tcp.port")
tcp_port:add(7531, poolcounter_protocol)
On modern Linux systems you should be able to save this as ~/.local/lib/wireshark/plugins/poolcounter.lua and then it will work automatically in either wireshark or tshark.
Tracing the execution of certain flavors of requests
Imagine that you want to see the full conversational 'flow' between PoolCounter and its clients for a certain part of the keyspace; for our example, we'll use enwiki:SpecialContributions:a:127.0.0.1.
Since the PoolCounter server's responses (e.g. LOCKED) don't include the key they refer to, this isn't trivial to do.
Begin with a packet capture from the timespan you're interested in. You might generate this on a poolcounter host (or on an appserver host you're using for testing) with e.g.
sudo tcpdump tcp port 7531 -c 500000 -w poolcounter.pcap
Then, we'll ask Wireshark to extract the list of its internal TCP stream ID numbers for all requests that match that keyspace:
tshark -r poolcounter.pcap -Y 'poolcounter.cmd contains "enwiki:SpecialContributions:a:127.0.0.1"' -T fields -e tcp.stream | sort | uniq > ids.txt
Once we have that list of IDs, we'll transform it into a Wireshark display filter:
FILTER=$(sed -e 's/^/tcp.stream eq /' -e :a -e 'N;s/\n/ or tcp.stream eq /;ta' ids.txt)
and then use that filter to select all PoolCounter protocol traffic from just those streams in the original packet capture:
tshark -r poolcounter.pcap -Y "poolcounter and ($FILTER)" -e frame.time_relative -e frame.time -e ip.src -e tcp.stream -e poolcounter.cmd -Tfields
Code stewardship
- Maintained by the MediaWiki Platform Team.
- Live chat (IRC): #mediawiki-core
- Issue tracker: Phabricator PoolCounter