
Migrate MySQLs to use ROW-based replication
Closed, Resolved · Public

Description

Row-based replication has some important advantages:

  • In most cases it minimizes replication lag (which we occasionally suffer on some hosts)
  • It minimizes slave drift, which we are currently suffering from
  • It is more prone to break replication if a schema or data difference is detected (it fails faster, surfacing problems early)
  • It usually results in reduced contention, better performance, and less locking

It has some disadvantages:

  • Increased size of binary logs, which affects both disk space and the bandwidth needed. In edge cases (large blobs), it could hurt binary log performance when writing to disk (configuration should be tuned)
  • It makes it difficult to do one-host-at-a-time schema changes (which is the main way we do them for MediaWiki core hosts right now)
  • It is more prone to break replication if a schema or data difference is detected (fails faster) - this can be either a curse or a blessing
  • Performance may not be great if slaves do not have proper primary keys; on the other hand, it can improve performance in other cases
  • It requires an external tool (mysqlbinlog) to see the underlying ongoing queries (for example, if they are stuck) - changes are not shown in SHOW PROCESSLIST by the system threads
  • Sanitarium requires statement-based binary logs in order to filter rows to labsdbs - no longer true, since from MariaDB 10.1 triggers can run on the replica side under row-based replication
  • It makes it impossible, or more difficult, to use some tools like pt-table-checksum, especially on multi-tiered setups (for codfw, for labs)

This ticket is to decide whether this change is worth it, how to do it, where (maybe not all servers require it), when, and what blockers there are.
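For illustration, the format switch itself is a single server variable; a minimal sketch of the MariaDB/MySQL statements involved follows (the binlog file name below is hypothetical):

```
-- Check the current binary log format (STATEMENT, MIXED or ROW).
SELECT @@global.binlog_format;

-- Switch to row-based logging; only sessions started after this pick it up,
-- so in practice this is set in my.cnf and rolled out via configuration management.
SET GLOBAL binlog_format = 'ROW';

-- With ROW, replicated changes are not visible in SHOW PROCESSLIST; the row
-- events have to be decoded from the binary log, e.g. with
-- `mysqlbinlog --verbose`, or from SQL (file name is made up):
SHOW BINLOG EVENTS IN 'db1050-bin.000123' LIMIT 20;
```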

Related Objects

Event Timeline

jcrespo raised the priority of this task from to Needs Triage.
jcrespo updated the task description. (Show Details)
jcrespo added subscribers: jcrespo, Springle.
akosiaris triaged this task as Medium priority.
akosiaris subscribed.

Assigning to @jcrespo as the man for the job

I would like to do a full-scale test of this feature on codfw to validate it and test its configuration (sadly binlog_row_image is not available until 10.1.6).
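For reference, binlog_row_image is the variable that controls how much of each row gets written to the binary log, which is the main knob for the binlog-size concern listed in the description; a hedged sketch (values and defaults per the MariaDB/MySQL documentation):

```
-- binlog_row_image needs MariaDB >= 10.1.6 (it also exists in MySQL >= 5.6).
SELECT VERSION();

-- FULL (the default) logs complete before/after images of each row;
-- MINIMAL logs only the primary key plus the changed columns, which keeps
-- binary logs smaller for wide tables and large blobs.
SET GLOBAL binlog_row_image = 'MINIMAL';
SELECT @@global.binlog_row_image;
```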

Change 256660 had a related patch set uploaded (by Jcrespo):
Add additional variable binlog_format that can be used on templates

https://gerrit.wikimedia.org/r/256660

Change 256660 merged by Jcrespo:
Add additional variable binlog_format that can be used on templates

https://gerrit.wikimedia.org/r/256660

Enwiki on codfw is now using ROW-based replication as a test, to check for regressions and compare its performance against MIXED (mostly STATEMENT) on eqiad.

Should we continue doing ROW performance testing on codfw at the same time as T121207 is being worked on?

Phabricator_maintenance renamed this task from Migrate MySQLs to use ROW-based replication (tracking) to Migrate MySQLs to use ROW-based replication. Aug 14 2016, 12:20 AM

What's the status of this task? The previous comment is from over a year ago.

This is an Epic task and quite hard to achieve in the short or even medium term.
To give you an example, row-based replication is quite strict about data drifts and can break replication if the data isn't exactly the same in all the tables.
We are currently in the process of checking all the data across the shards (running pt-table-checksum and some in-house scripts) to evaluate (and fix) any data drifts we have.
The new labs infra does use row-based replication.

There are other things we need to evaluate when slowly moving things to row-based replication, such as binlog sizes, potential regressions in replication speed (especially for big data changes), etc.
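As a rough idea of what those checks involve (a simplification of what pt-table-checksum and the in-house scripts do; the table name is just an example):

```
-- Compare a table's checksum on the master and on each replica; a mismatch
-- means drift, which under ROW would break replication on the next write to
-- the differing rows. (pt-table-checksum does this chunk by chunk and
-- propagates its results through replication itself.)
CHECKSUM TABLE revision;

-- Watch binary log growth after switching to ROW, since row images are
-- usually larger than the statements that produced them.
SHOW BINARY LOGS;
```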

@MZMcBride Note this doesn't depend on us DBAs - changing to row-based replication is a one-time change that is instantaneous. However, the application has to work well with it. MediaWiki developers have so far expressed no interest in it, as production itself works more or less OK for them - you can tell by the lack of comments here from MediaWiki hackers. There is nothing we can do without that. Please help us convince the application developers that this is worth pursuing.

However, what we can do now is create an intermediate replica that runs row-based replication and make labs use row-based replication exclusively. That has already happened (it is not alleged, it is real), and it is working, as can be seen on the newly set up servers under the temporary DNS names labsdb-analytics.eqiad.wmnet and labsdb-web.eqiad.wmnet which, as far as I can see, have been kept in sync with production for several months already. You can already use them for testing.

Sorry, I misunderstood the scope of this task. I thought this task was about Wikimedia Labs using row-based replication, not Wikimedia production. I think I'm actually looking for T138967: Labs database replica drift.

> However, what we can do now is create an intermediate replica that runs row-based replication and make labs use row-based replication exclusively. That has already happened (it is not alleged, it is real), and it is working, as can be seen on the newly set up servers under the temporary DNS names labsdb-analytics.eqiad.wmnet and labsdb-web.eqiad.wmnet which, as far as I can see, have been kept in sync with production for several months already. You can already use them for testing.

In the context of Wikimedia Labs, the word testing is confusing to me. Isn't all of Labs for testing? It's nice to hear that the data integrity issues we've been having on Wikimedia Labs might soon be resolved.

> In the context of Wikimedia Labs, the word testing is confusing to me. Isn't all of Labs for testing? It's nice to hear that the data integrity issues we've been having on Wikimedia Labs might soon be resolved.

This is a pretty serious threadjack, but no, all of Labs/Cloud Services isn't for testing. The deployment-prep project is for testing MediaWiki and related services. The contint project is for running Jenkins workers that perform tests for code. Tool Labs/Toolforge and many other projects are the only "production" environment for the services that they supply. At least one academic paper has been written about the on-wiki impact of tools being off-line. We try very hard not to break things in the infrastructure for this reason. The replica databases are very much treated as services that should be available and stable as much as possible. The common association of 'labs' and 'experiments' is one of the drivers for rebranding Wikimedia Labs as a suite of products under the Wikimedia Cloud Services team.

I don't think migrating to ROW is something we can actually do now, after seeing the breakage caused when the s5 master died (T180714) while a schema change was ongoing and the replacement host was one that was running row-based replication.

Having ROW on the master makes some specific schema changes (i.e. adding a column) impossible unless they are done directly on the master and allowed to flow down through replication.
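A minimal sketch of that constraint (the added column is made up; MediaWiki's page table is only used as an example): under STATEMENT the ALTER can be applied host by host, while under ROW the safer path is to run it once on the master and let the DDL replicate:

```
-- Row images have to line up with each replica's table definition (extra
-- columns are only tolerated at the end of the table), so replica-first
-- rolling changes are easy to break. Running the change on the master and
-- letting it flow through replication keeps every host consistent, at the
-- cost of the ALTER hitting all replicas at roughly the same time.
ALTER TABLE page ADD COLUMN page_example_flag TINYINT NOT NULL DEFAULT 0;
```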

We believe that since s5 was accidentally migrated to ROW, its lag has improved; the same happened on the labsdbs, despite them not having any kind of replication control, unlike production.

This is important, but not a goal for this quarter - we are still blocked on MediaWiki extension maintainers making their code compatible with it; however, all databases (misc, x1, parsercache, es) have meanwhile been migrated to ROW with great success.

There are some issues left to solve regarding schema changes, but this is still a desirable change; at least to have the option, even if we end up going for MIXED or STATEMENT.

> This is important, but not a goal for this quarter - we are still blocked on MediaWiki extension maintainers making their code compatible with it; however, all databases (misc, x1, parsercache, es) have meanwhile been migrated to ROW with great success.

> There are some issues left to solve regarding schema changes, but this is still a desirable change; at least to have the option, even if we end up going for MIXED or STATEMENT.

We're already on RBR in our 1.19-based environment, so I just wanted to learn more about the outstanding issues blocking this task in case there is something I might be able to help with. If I understand the task graph correctly, the main blockers are tables that do not have a primary key or unique index specified, right? Or is there other work left for application developers? Thanks in advance :)
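On the primary-key point: one way to list tables lacking a PRIMARY or UNIQUE key straight from information_schema (run per wiki database) is sketched below; it is not the exact check the DBAs use, just an illustration:

```
-- Base tables that have no PRIMARY KEY or UNIQUE constraint at all; on such
-- tables, ROW replication has to scan to locate the rows to update/delete,
-- which is the performance risk mentioned in this task.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_name IS NULL;
```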

@TK-999 Please note that this is an infrastructure limitation, which means it is mostly related to the Wikimedia servers, not MediaWiki itself. As I see it, our main limitations are:

  • Compatibility for schema changes: While ROW allows for different schemas between master and replicas, the conditions for that are much more limited (extra columns at the end) than with STATEMENT-based replication. Many MediaWiki schema changes are not ROW-based "hot" compatible. If you can depool your replicas, or they hold a limited amount of data, this may not affect you. While synchronous hot schema change tools are available, most are too costly/dangerous for us.
  • Performance: Some writes are small transactions under STATEMENT but become a large number of row writes under ROW. Again, if you do not perform such maintenance jobs or operations, or your dataset is smaller, you may not be affected by this.
  • Some loss of flexibility for certain uses: STATEMENT can be converted to ROW, but not the other way around - some use cases require STATEMENT replication (e.g. maintenance tools)

As you can see, these are mostly the things mentioned in the summary. We have run metadata servers on ROW by accident and it didn't cause large issues - and we use ROW for all other servers except direct replicas, so it is highly likely that MediaWiki would work for your use cases, especially if your setup is smaller and less complex than Wikimedia's.
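To make the first limitation concrete, a hedged example of the "extra columns at the end" rule (table and column names are invented):

```
-- Table as it exists on the master.
CREATE TABLE example_tag (
  et_id   INT UNSIGNED NOT NULL PRIMARY KEY,
  et_name VARBINARY(255) NOT NULL
);

-- Adding a column at the END on a replica first is generally tolerated by
-- ROW replication, so this kind of change can still be rolled out replica
-- by replica ahead of the master:
ALTER TABLE example_tag ADD COLUMN et_count INT UNSIGNED NOT NULL DEFAULT 0;

-- Adding it in the middle (or reordering / changing column types) is not:
-- the replica's definition no longer matches the row images coming from
-- the master, and replication breaks until both sides agree.
-- ALTER TABLE example_tag ADD COLUMN et_count INT UNSIGNED NOT NULL DEFAULT 0 AFTER et_id;
```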

@jcrespo I would like to close this - I don't think this is doable even in the long term; I would say this is very, very long term for the sX sections. There are many limitations here that I don't think we can resolve entirely in a reasonable amount of time, mostly:

  • Schema change deployment
  • Being 100% sure that data is consistent across all the hosts on all the wikis
  • Schema drifts

As you originally created the task, I will leave it up to you.

jcrespo closed this task as Declined. (Edited) Sep 30 2020, 9:37 AM

> This ticket is to decide whether this change is worth it, how to do it, where (maybe not all servers require it), when, and what blockers there are.

In a way this is "done" - it was changed in as many places as it could be, especially on cloud, where that causes drifts.

So it is in a state between resolved and declined.

jcrespo changed the task status from Declined to Resolved. Sep 30 2020, 9:38 AM