Talk:2-satisfiability


Contradiction


The article SAT solver states:

SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, they are disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables).

which is clearly in direct contradiction with the intro to this page, which insists on CNF! Could someone please fix? linas (talk) 17:36, 31 March 2008 (UTC)

It is not a contradiction. Sat in disjunctive normal form is trivial: each term gives a satisfying assignment. Sat in conjunctive normal form is, in general, hard, but 2-sat is an easier (though not as trivial as DNF) special case. —David Eppstein (talk) 17:40, 31 March 2008 (UTC)
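
For concreteness, a minimal Python sketch of the point above (an illustration, not taken from either article): a DNF formula is satisfied by any term that does not pair a variable with its own negation, so an assignment can be read off term by term. The (variable, is_positive) encoding is just a choice for this example.

def dnf_satisfying_assignment(terms):
    """terms: list of terms; each term is a list of (variable, is_positive) literals."""
    for term in terms:
        assignment = {}
        for var, positive in term:
            if assignment.get(var, positive) != positive:
                break                      # term contains both x and not-x: skip it
            assignment[var] = positive
        else:
            # any internally consistent term satisfies the whole DNF formula;
            # variables not mentioned in it may take either value
            return assignment
    return None                            # every term is self-contradictory: unsatisfiable

# ((x and not y) or (y and not y)) is satisfied by x = True, y = False
print(dnf_satisfying_assignment([[("x", True), ("y", False)], [("y", True), ("y", False)]]))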

Minor problem with diagram not matching text


The diagram uses variables v0, v1, etc. whereas the formulas use x0, x1, etc. Can the owner of the diagram please change the variables to x rather than v to improve ease of comprehension? I could change the text to v1, v2, etc. but the x's are rather more typical. Ross Fraser (talk) 22:49, 3 August 2008 (UTC)

Done. —David Eppstein (talk) 23:05, 3 August 2008 (UTC)

Error in Random 2-satisfiability instances


The following sentence in the text seems to have its conclusions reversed: "if m/n is fixed as a constant α ≠ 1, the probability of satisfiability tends to a limit as n goes to infinity: if α < 1, the limit is zero, while if α > 1, the limit is one." If α < 1 (fewer clauses than variables) the limit is 1 (almost all are satisfiable), whereas if α > 1, the limit is zero (almost none are satisfiable). Ross Fraser (talk) 22:50, 3 August 2008 (UTC)

Fixed. Thanks for catching this. —David Eppstein (talk) 23:00, 3 August 2008 (UTC)
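
For anyone who wants to see the corrected claim numerically, a rough Monte Carlo sketch (not from the cited sources; satisfiability is checked by brute force, so n must stay small, and finite-size effects blur the transition, which only sharpens as n grows). The parameter choices below are arbitrary.

import itertools
import random

def random_2sat(n, m):
    """m random clauses over variables 0..n-1; a literal is (variable, is_positive)."""
    clauses = []
    for _ in range(m):
        v1, v2 = random.sample(range(n), 2)
        clauses.append(((v1, random.random() < 0.5), (v2, random.random() < 0.5)))
    return clauses

def satisfiable(n, clauses):
    """Brute-force check over all 2^n assignments (only sensible for small n)."""
    for bits in itertools.product([False, True], repeat=n):
        if all(bits[v1] == s1 or bits[v2] == s2 for (v1, s1), (v2, s2) in clauses):
            return True
    return False

def satisfiable_fraction(n, alpha, trials=100):
    m = int(alpha * n)
    return sum(satisfiable(n, random_2sat(n, m)) for _ in range(trials)) / trials

# Expect a high fraction for alpha < 1 and a noticeably lower one for alpha > 1.
for alpha in (0.5, 2.0):
    print(alpha, satisfiable_fraction(12, alpha))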

1979


Was 2SAT really not known to be in P until 1979?? I'm surprised. 69.111.192.233 (talk) 08:39, 19 November 2010 (UTC)

The algorithms section also cites a different polynomial time algorithm from 1971. But the 1979 one comes first because it's a better way to solve these things: the emphasis in that section is on how to solve it, not so much on putting things into their proper historical order. —David Eppstein (talk) 16:43, 19 November 2010 (UTC)
According to Knuth's The Art of Computer Programming, the observation which was attributed to Aspvall et al. (1979) is due to Melven Krom (1967). I added the corresponding references in the article. Now we hopefully have both the proper historical order and a succinct explanation of how to solve it. Hermel (talk) 19:50, 19 November 2010 (UTC)
Thanks for finding this. I learned something I didn't already know. —David Eppstein (talk) 20:10, 19 November 2010 (UTC)
Thanks. 69.111.192.233 (talk) 02:14, 20 November 2010 (UTC)
I recently updated the algorithms section to describe these contributions in more detail. Knuth appears to be mistaken: Krom doesn't say anything about strongly connected components. Krom's paper instead uses a technique that is essentially the same as the resolution proof (or transitive closure), which is polynomial but nonlinear, although Krom doesn't say anything explicit about running time. The 1971 paper (Cook) also mentions the same technique and does say that it's polynomial time. And there's also a 1976 paper giving a linear time algorithm that is significantly more complicated than the Aspvall et al. one (because it involves interleaving two parallel threads). —David Eppstein (talk) 20:41, 5 February 2011 (UTC)
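
To make the distinction concrete, a rough Python sketch of the polynomial-but-nonlinear idea described above for Krom and Cook (a paraphrase in terms of reachability in the implication graph, not either paper's own presentation): the formula is unsatisfiable exactly when some variable implies its own negation in both directions.

from collections import defaultdict

def implication_graph(clauses):
    """clauses: pairs of nonzero ints, +v for x_v and -v for (not x_v), 1-based."""
    graph = defaultdict(set)
    for a, b in clauses:              # (a or b) yields (not a -> b) and (not b -> a)
        graph[-a].add(b)
        graph[-b].add(a)
    return graph

def reachable(graph, start):
    """All literals reachable from start by following implications."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def satisfiable_by_closure(clauses, num_vars):
    graph = implication_graph(clauses)
    for v in range(1, num_vars + 1):
        if -v in reachable(graph, v) and v in reachable(graph, -v):
            return False              # x_v forces (not x_v) and vice versa
    return True

# (x1 or x2), (not x1 or x2), (not x2 or not x2) is unsatisfiable;
# dropping the last clause makes it satisfiable.
print(satisfiable_by_closure([(1, 2), (-1, 2), (-2, -2)], 2))   # False
print(satisfiable_by_closure([(1, 2), (-1, 2)], 2))             # True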

Proof of Krom's algorithm unclear?

And, if it is consistent, then the formula can be extended by repeatedly adding one clause of the form (x ∨ x) or (¬x ∨ ¬x) at a time, preserving consistency at each step, until it includes such a clause for every variable. At each of these extension steps, one of these two clauses may always be added while preserving consistency, for if not then the other clause could be generated using the inference rule. Once all variables have a clause of this form in the formula, a satisfying assignment of all of the variables may be generated by setting a variable x to true if the formula contains the clause (x ∨ x) and setting it to false if the formula contains the clause (¬x ∨ ¬x).

Clearly this shows what the satisfying assignment would be, but it's not really clear that it solves all the original clauses. The paper linked is in a pay-access database, so I can't look up his phrasing. Can anyone help with this? Twin Bird (talk) 20:54, 3 June 2011 (UTC)

That's what the "preserving consistency" part means. But I agree that it doesn't really explain how to find the extending clause. Given the fact that more efficient algorithms than Krom's were developed later, it didn't seem important to describe all the details of the algorithm. —David Eppstein (talk) 21:05, 3 June 2011 (UTC)
How is that what "preserving consistency" means? The formula remains consistent, and it's easily seen that if the new formula has a satisfying assignment, it must be the one that meets all those clauses, but it doesn't show that that assignment actually satisfies it but for the fact that a consistent formula is satisfiable, which is exactly what this part of the paragraph is trying to prove. Twin Bird (talk) 23:17, 3 June 2011 (UTC)
Suppose you follow Krom's procedure and end up with a consistent formula in which, for each variable, there is either a clause (x v x) or (~x v ~x). Then the unique truth assignment determined by these clauses must also satisfy every clause (x v y), for otherwise (if we had the three terms P:(~x v ~x), Q:(x v y), R:(~y v ~y)) applying the inference rule gives QR: (x v ~y), QRQ: (x v x), inconsistent with P. —David Eppstein (talk) 18:17, 4 June 2011 (UTC)
That makes sense, but it's not what's said. All that's said is how to make the clause, and that its consistency is the same, not why it would otherwise reduce to an inconsistent clause. I'd add it myself, but I can't be sure it's what Krom said, not having access to his paper (if only it were a month ago...). And yes, there are better algorithms out there now, but this one seems important for historical reasons, being the earliest seen and the one in use when Cook wrote his paper, and if I had to guess I'd say a similar formulation remains useful to descriptive complexity. Twin Bird (talk) 05:32, 8 June 2011 (UTC)
I don't seem to have access to the original Krom paper any more (I used to), so until I do that's all you're going to be able to get; sorry. —David Eppstein (talk) 05:23, 11 June 2011 (UTC)
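
For what it's worth, the derivation given above can be checked mechanically. A small sketch, assuming the usual reading of the inference rule (from (a ∨ b) and (¬b ∨ c) infer (a ∨ c)); the string encoding with "~" for negation is only for this example.

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """Clauses are 2-tuples of literals; return every clause obtained by cancelling one complementary pair."""
    out = set()
    for i in (0, 1):
        for j in (0, 1):
            if c1[i] == neg(c2[j]):
                out.add(tuple(sorted((c1[1 - i], c2[1 - j]))))
    return out

P = ("~x", "~x")                              # (not x or not x)
Q = ("x", "y")                                # (x or y)
R = ("~y", "~y")                              # (not y or not y)

step1 = resolvents(Q, R)                      # QR:  {('x', '~y')}, i.e. (x or not y)
step2 = set().union(*(resolvents(c, Q) for c in step1))
print(step1, step2)                           # QRQ: contains ('x', 'x'), contradicting P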

GA Review

This review is transcluded from Talk:2-satisfiability/GA1.

Reviewer: Falcon Kirtaran (talk · contribs) 04:52, 10 October 2016 (UTC)

GA review – see WP:WIAGA for criteria

  1. Is it well written?
    A. The prose is clear and concise, and the spelling and grammar are correct:  
    B. It complies with the manual of style guidelines for lead sections, layout, words to watch, fiction, and list incorporation:  
  2. Is it verifiable with no original research?
    A. It contains a list of all references (sources of information), presented in accordance with the layout style guideline:  
    B. All in-line citations are from reliable sources, including those for direct quotations, statistics, published opinion, counter-intuitive or controversial statements that are challenged or likely to be challenged, and contentious material relating to living persons—science-based articles should follow the scientific citation guidelines:  
    C. It contains no original research:  
    D. It contains no copyright violations nor plagiarism:  
  3. Is it broad in its coverage?
    A. It addresses the main aspects of the topic:  
    B. It stays focused on the topic without going into unnecessary detail (see summary style):  
  4. Is it neutral?
    It represents viewpoints fairly and without editorial bias, giving due weight to each:  
  5. Is it stable?
    It does not change significantly from day to day because of an ongoing edit war or content dispute:  
  6. Is it illustrated, if possible, by images?
    A. Images are tagged with their copyright status, and valid fair use rationales are provided for non-free content:  
    B. Images are relevant to the topic, and have suitable captions:  
  7. Overall:
    Pass or Fail:  
    • Impeccable work! I can't find a single thing wrong with this. My only comment is that it can be a bit of a wall of text at times, and might benefit from the inclusion of a couple more diagrams here and there, especially in the applications section, to help show what is going on. I'm quite happy with this as a GA; good luck at FAC! FalconK (talk) 05:16, 10 October 2016 (UTC)


Misrepresentation of Aspvall/Plass/Tarjan


The Wikipedia article, in the section "Strongly Connected Components", misrepresents Aspvall/Plass/Tarjan.

In particular, checking satisfiability (and also finding an assignment for the variables) does not require the construction of the condensed graph (which is how this Wikipedia article makes it sound).

recommend: remove everything from "Construct the condensation ... skew-symmetric" — Preceding unsigned comment added by 2601:5C0:C100:178E:E026:A06E:3FDA:33F3 (talk) 06:12, 31 October 2016 (UTC)

Removing that material would leave the section inconsistent, as it would no longer make sense to say that the components are listed in topological order. What do you propose to replace that material by? Also, I don't think it's problematic to rephrase the algorithm from that source using higher-level concepts such as the condensation; it's still essentially the same algorithm. —David Eppstein (talk) 23:15, 31 October 2016 (UTC)
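
For reference, a compact sketch of the strongly-connected-components approach under discussion (a paraphrase, not the paper's presentation; it uses Kosaraju's SCC algorithm and recursion for brevity, so it is only meant for small instances): satisfiability is decided, and an assignment read off, directly from a component numbering in topological order, without materialising the condensation.

import sys
from collections import defaultdict

def solve_2sat(num_vars, clauses):
    """clauses: pairs of nonzero ints, +v for x_v and -v for (not x_v), 1-based."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    literals = list(range(1, num_vars + 1)) + [-v for v in range(1, num_vars + 1)]
    for a, b in clauses:                     # (a or b): (not a -> b), (not b -> a)
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)

    sys.setrecursionlimit(10000)
    order, seen = [], set()
    def dfs1(u):                             # first pass: record finishing order
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in literals:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, label):                      # second pass on the reverse graph
        comp[u] = label
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, label)
    label = 0
    for u in reversed(order):                # components get labels in topological order
        if u not in comp:
            dfs2(u, label)
            label += 1

    assignment = {}
    for v in range(1, num_vars + 1):
        if comp[v] == comp[-v]:
            return None                      # x_v and (not x_v) in one component: unsatisfiable
        assignment[v] = comp[v] > comp[-v]   # true iff x_v's component comes later
    return assignment

# (x1 or x2), (not x1 or x3), (not x2 or not x3) is satisfiable:
print(solve_2sat(3, [(1, 2), (-1, 3), (-2, -3)]))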