
Community Wishlist Survey 2017/Miscellaneous

Miscellaneous
27 proposals, 416 contributors



Get feedback using a yes/no microsurvey

  • Problem: At the moment, it is not possible to get feedback from the majority of people, because:
    • they do not follow community discussion boards
    • going to a given page is an effort, and everyone can have very good reasons not to express how they feel about something
    • there is no way to collect their opinion in a given context
    Some examples:
    • to leave feedback about a feature, you have to be experienced. Users/readers have to understand the structure of the whole wiki to try to search for a page where they can leave some feedback (finding it is not guaranteed).
    • help page maintainers do their best to write help pages (as do editors on articles), but they don't know whether those pages are useful to their audience, unless someone figures out there is a Talk tab and leaves a message there (hint: it never happens).
    • ...
  • Who would benefit: Anyone, because the use cases are many:
    • People who don't know they can express their opinion about something or help improve it.
    • People improving things, to get direct input about something very specific, in a given context.
    • Help page writers, to create better pages; and people looking for more information, to give feedback about the page they are reading and then benefit from the improvement.
    • Developers and users, for easier feedback about a given feature.
    • Editors who want to know if a part of the page they are working on is easy to understand.
    • ...
  • Proposed solution:
    Have a way for anyone to be surveyed about something specific. It can be to say whether they have found what they were expecting, how they feel about a given feature...
    It is just a yes/no question. This is already the case for some online documentation, like Google's help pages, where you can say whether you found the page helpful. In the case of a "no", a link should be added pointing to a topic where people can explain what they were expecting.
  • More comments:
    • This task was first drafted as "have a way to know if people find what they are looking for on Help pages" but has been extended a bit to make the yes/no solution a unified practice.
    • Some people may recall the Article feedback tool. That tool was very useful for collecting feedback on help pages. The extension was not perfect: the wording used prompted people to deliver unhelpful feedback, and its focus was on articles. Removing comment curation would simplify the task a lot.

Discussion

  • This would be super useful but also a lot of work. I imagine the controls would end up being of similar complexity to CentralAuth, except that you'd also have to collect/display results. Not really a wishlist-level thing IMO. --Tgr (talk) 07:36, 28 November 2017 (UTC)[reply]
    Tgr, it can also be simpler, with a script called by a template, to leave opinions and comments on a sub-page. Maybe like the Support button used on this page, but as an extension. Trizek from FR 10:00, 28 November 2017 (UTC)[reply]
  • Hi Trizek,
    For a while now I've been thinking we need a simple tool for getting feedback from readers. Indeed, most readers don't know about article talk pages and don't know how to post there. However, as pointed out in the votes below, the Article feedback tool was a resounding failure in that regard.
    Why do you think the initiative you propose will be more successful? Simon Villeneuve 15:48, 29 November 2017 (UTC)
    Hi Simon, I think a simpler tool, used occasionally rather than systematically, will help collect more relevant feedback. Also, as noted below, the way things were worded led to quite a lot of useless feedback. In short, a system answered with a yes/no will in my opinion be useful and a winner. Trizek from FR 20:53, 30 November 2017 (UTC)[reply]
  • Very good tool to have, indeed. Two comments/suggestions. •(1) As a user, I am always pleased to give my opinion, especially in a quick and easy manner. And therefore, I find a yes/no question a clever feature... as long as I am to reply "yes"; if it's a "no" for me, I feel frustrated if I can't explain my disapproval or disagreement, since there are so many ways to not be "aligned" with a given solution (the one stated by the question) and only one to agree (welcome in our Ā-world (null-A)). This means I can't really participate in correction or improvement, and I feel this "no" is like a useless "bark"; simply adding the possibility of a (discreet) link toward a discussion page or something equivalent, possibly more user-friendly, in order to collect a few words of explanation, comment, growl, or of gratefulness, why not, would be fine; a click toward a new window/dialog is not a puzzle when you are ready to type a few words. •(2) As a potential user of the tool, making it a bit more universal, such as allowing for a quiz or a multiple-choice question, would be really great, probably without much supplementary effort. Examples: completing the yes/no alternative with one or several (not semantically equivalent) choices such as "maybe", "no opinion", "unconcerned", or a graduated answer such as "positively yes"-"yes"-"mostly agreeing"-"balanced, undecided"-"mostly against"-"no"-"strongly no", or a "0 to 10" scale of agreement (but programming a slider is quite different from just a bunch of radio buttons, I guess). Whatever the answer, a level-0 tool, a simple yes-or-no alternative, will be a great addition. --Eric.LEWIN (talk) 01:35, 30 November 2017 (UTC)[reply]
    Thanks Eric.LEWIN! Your idea of having a link to a talk page where you can expand on what you think of the feature and explain your vote is definitely something to consider in the product definition. Having a dialog input may run into the same problems the Article feedback tool had with the usefulness of comments. I also like your idea of having multiple choices, or a scale; that would be nice! Maybe for a later iteration? Yes/No would be great as a first step that we can build on. Thanks! Trizek from FR 20:53, 30 November 2017 (UTC)[reply]
  • There are a few hurdles with this one, even if it is quite nice. There are even research papers about it! First, there must be a cost with a system like this, otherwise it turns into a like-system. Then the users must be allowed to vote on a scale to express how they feel about the question. Because different people express their feelings differently, the votes must be normalized somehow. Lastly, the scales are for different dimensions, which might be overlapping or duplicated, so they must be folded into a lower dimension to make sense. This folding into a lower-dimensional space is non-trivial. (Yes, you can use PCA, but it will most likely create a mess.) — Jeblad 00:02, 11 December 2017 (UTC)[reply]

Voting


Word count on statistics

  • Problem: We haven't had an up-to-date word count since 2014, and this is a basic statistic for measuring Wikipedia's size
  • Who would benefit: Statistics lovers and everyone who wants to show the size of Wikipedia
  • Proposed solution: Having a word count computed from the dump would be the solution
  • More comments:

Discussion


This is relatively straightforward: we already have the per-article word counts broken out (they are in search results); there just isn't a public way to ask for a sum. FWIW a sum on the en.wikipedia.org content index currently reports: 3.049711774E9 EBernhardson (WMF) (talk) 03:17, 18 November 2017 (UTC)[reply]

Where did you find this number? -Theklan (talk) 17:57, 20 November 2017 (UTC)[reply]
I wrote a custom query against the elasticsearch cluster to aggregate the stored word count (as I'm a developer working on search at WMF). I've put up a patch in code review to integrate this into Special:Statistics. I would expect this to be merged and roll out sometime in December. This is only the raw word count of pages considered articles, not any of the more advanced things discussed below. EBernhardson (WMF) (talk) 19:09, 28 November 2017 (UTC)[reply]
@Theklan: This has now been rolled out to all wikis; you can get the counts from the Special:Statistics page. 2601:648:8402:C015:307E:5334:1490:C6B9 19:09, 15 December 2017 (UTC)[reply]
@EBernhardson (WMF): Are you sure that this number is correct? The number was considerably higher in 2014 according to Wikistats. -Theklan (talk) 00:57, 16 December 2017 (UTC)[reply]
@Theklan: Wikistats may have been calculating something different; I would have to dig into what they counted. This particular count takes the content (main namespace), removes some non-content portions (tables, hatnotes, etc.) and then counts the number of individual words (as determined by tokenization with Lucene, the same as used for full-text search). If we were to include non-content pages, the value would increase from 3.1 billion to 11.3 billion. EBernhardson (WMF) (talk) 17:58, 17 January 2018 (UTC)[reply]
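For illustration, the kind of aggregation described above is a single sum over the per-page word counts stored in the search index. A minimal sketch using the elasticsearch Python client follows; the cluster address, index name and the text.word_count field are assumptions based on CirrusSearch's public mappings, not the exact query that was run:

    from elasticsearch import Elasticsearch

    # Sum the stored per-article word counts across the whole content index.
    es = Elasticsearch(["http://localhost:9200"])  # hypothetical cluster address

    result = es.search(
        index="enwiki_content",  # assumed name of the article (content) index
        body={
            "size": 0,  # no documents needed, only the aggregation
            "aggs": {"total_words": {"sum": {"field": "text.word_count"}}},
        },
    )
    print(result["aggregations"]["total_words"]["value"])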

I would suggest taking this further with basic readability statistics. There are various well-established metrics, but even simple things like average words per sentence and syllables per word would be helpful. T.Shafee(Evo﹠Evo)talk 11:02, 18 November 2017 (UTC)[reply]

Readability metrics are misleading and bullshit. Source: I built one. --Dispenser (talk) 18:03, 20 November 2017 (UTC)[reply]

Note: This idea was also suggested at wikitech-l a few days ago, and a reply pointed out a userscript that does a very simple version. Quiddity (WMF) (talk) 19:52, 20 November 2017 (UTC)[reply]

User:Dr pda made a byte and word counter years back and lists issues with counting "article text". The reason people like word counts is "100 words = 1 minute of reading" (without regard to textual difficulty). It naturally excludes infoboxes, tables, images, navboxes, etc. --Dispenser (talk) 21:10, 20 November 2017 (UTC)[reply]
Yes, but I don't want a script that measures the word count of a given article, but the global number of words in the whole Wikipedia project. There's a difference there! -Theklan (talk) 12:03, 21 November 2017 (UTC)[reply]

Voting


Responsive CSS/Template Framework for MediaWiki

  • Problem:

Most of the help and meta pages on Wikipedia and other wiki projects are cobbled together from strange copy-and-paste constructions of templates, HTML and inline styles. This leads to an inconsistent, hard-to-maintain interface that fails on many platforms. Many of the editors maintaining these pages are not programmers, and since there is no simple, comprehensible library where they can find all the buttons, boxes, grids and teasers, most of them just copy what they find on other meta pages, often without really understanding how it works.

  • Who would benefit:

Readers and Editors alike

  • Proposed solution:
    Create a simple but effective library of CSS styles (like Bootstrap) and MediaWiki templates that enables editors to quickly create interfaces that work on all devices and screen sizes and have a consistent look and feel. Possible components are:
    • Buttons
    • Teaser-Boxes
    • Form-Elements
    • A Grid-System
  • More comments:
  • Phabricator tickets:

Discussion


If I understand correctly, this would be like T90687, except scoped for the content area, not the software/skin area. That would be a great thing to have, although potentially a lot of work (it would have to take into account different devices, different skins, RTL...). Developers would probably benefit just as much as editors/readers, as it would be easier to make assumptions about how articles look / wrangle the content to be appropriate for mobile screens.

@Martin Kraft: The proposal could do with a less handwavy list of use cases IMO (forms and buttons are barely used in wikitext-generated content, and I imagine a grid and boxes would not be the only things to standardize; see also the various subtasks and blockers of T483).

For performance reasons this might depend on TemplateStyles (although if it's not too much overhead we might just prefer to load such a framework on all pages; especially if the original, software-interface-oriented version of T90687 does get done and uses the same rules). --Tgr (WMF) (talk) 02:09, 21 November 2017 (UTC)[reply]

Voting


Making MediaWiki MOOC-ready

  • Problem: Wikiversity missed the massive open online course (MOOC) train. Even the WikiMOOC, which teaches how to contribute to Wikipedia, wasn't hosted on Wikiversity. For our movement to gain acceptance in the educational field, and to serve people eager to learn through MOOC platforms, MediaWiki needs a serious upgrade. This could be in the form of an upgrade to the MW:Extension:Quiz extension.
  • Who would benefit: The Wikiversity communities, of course, but also anyone looking for an in-house MOOC platform. Plus, more contributors and content on a single language version of Wikiversity make it far more likely to find volunteer translators, thanks to its skilled community and its already well-tooled translation environment.
  • Proposed solution:
    • enable users to follow their progress by providing the ability to record the results of evaluation forms
    • provide:
      • easier publication of existing courses
      • possibly, a way to validate knowledge/skill acquisition
        • online, with some strong identity control and prevention of at least the most basic forms of cheating
        • offline, with tools that ease the coordination of exam sessions in dedicated places
    • establishing a list of features that a MOOC platform must have to be successful is part of this proposal; please add to it
  • More comments:
  • Phabricator tickets:

Discussion

  • See also Wikimedia MOOC platform and related Phabricator project for an on-going discussion on this matter. — The preceding unsigned comment was added by Noé (talk)
    This could also just be an improvement of MW:Extension:Quiz to allow the exercises results persistence and consultation. JackPotte (talk) 18:03, 8 November 2017 (UTC)[reply]
    That's indeed one aspect of improving it. But it also should be improved in UX terms, so users could create forms with the VE, for example. Also, I don't remember if it's already possible to have a larger set of questions than the ones displayed, so the user might face a different set of questions each time. By adding a difficulty score to each question, one might also adapt questions to the user's previous results. Probably a large part of that could be implemented with modules and templates which take care of all the adaptive behaviour, but data persistence and form-editing UX are less likely to be implementable without dedicated development in the extension. --Psychoslave (talk) 11:12, 9 November 2017 (UTC)[reply]
    The MOOC module has been implemented on Wikiversity in Portuguese, and we are now developing our first course. We have relied on the modules that were released on Wikiversity in English, for instance the course on Web Science. I agree these modules would benefit from some improvement. --Joalpe (talk) 03:47, 9 November 2017 (UTC)[reply]
    Thank you I wasn't aware of that. I think that we should spread the word within versions of Wikiversity. --Psychoslave (talk) 11:12, 9 November 2017 (UTC)[reply]
  • I'd like to note that this proposal as it currently stands is very vague. What exact changes are you proposing? The community tech team works on software development, not partnerships with institutions. Is there consensus for what you're asking for? -- NKohli (WMF) (talk) 21:39, 20 November 2017 (UTC)[reply]
@Psychoslave: please see my above comment. -- NKohli (WMF) (talk) 00:24, 22 November 2017 (UTC)[reply]

Hi @NKohli:,

What exact changes are you proposing?
I think the least vague demand is an improvement of the Quiz extension:
  • it should allow users to keep a record of their previous results. The legal team should also take a look at this for privacy considerations, I think.
    • A dashboard giving an overview of progress would be fine too.
  • editors should be able to build quiz forms using only the visual editor

For institutional partners, we actually already have some: at least on the French Wikiversity we have courses which were provided by the CNED, like Mise en œuvre de l’accessibilité numérique and Convertir une formation existante au format MOOC. So no one seems to be against this kind of partnership. Providing better-suited tools will only help attract more courses released by institutional structures.

Does this answer your question? --Psychoslave (talk) 07:56, 22 November 2017 (UTC)[reply]

Yes, sorry for the late reply. You pinged the wrong person. I'll update the proposal description a bit according to your revised version. Thanks. -- NKohli (WMF) (talk) 23:38, 27 November 2017 (UTC)[reply]

Feels like Not Invented Here syndrome. Surely it is easier and more productive to fix an existing, high-quality MOOC framework to use MediaWiki authentication, design and stats than to implement some half-baked MOOC functionality in an extension... --Tgr (talk) 08:24, 28 November 2017 (UTC)[reply]

Voting


Add filters to history pages

  • Problem: On some high-traffic pages, due to the volume of activity, it is difficult to identify true authors or find other editing patterns.
  • Who would benefit: Admins and editors investigating page histories.
  • Proposed solution: Add ability to filter/sort page histories, by for example:
    • Show/hide IP edits
    • Show/hide reverted edits and their reverts
    • Show/hide banned user edits
    • Show/hide bot edits
    • Show/hide minor edits
    • Show/hide edits by number of bytes added/subtracted
    • Show/hide my edits – definitely a necessity
    • Show/hide selected user's edits (if this is deemed uncontroversial)
    • Show/hide deleted edits (for administrators only?)
  • More comments:

Discussion

  • Nice idea.
    • I had similar plans in background, since that wish came up once in a year in German Wikipedia.
    • There was no broad cry for such a feature.
    • I already maintain w:en:User:PerfektesChaos/js/listPageOptions, modifying watchlists and “recent changes”, and I might extend that to history pages with similar options to those a watchlist already offers. Or mw:User:PerfektesChaos/js/resultListSort, which sorts about 30 special pages.
    • What is a “banned user edit”?
    • “reverts” are edits like any other; they bear no special mark, and only full rollback uses a project-dependent summary. Simple reverts offer an editable summary.
    • IP / registered user, the user's own edits, bot edits, minor edits, number of bytes less/greater than, no summary, a personal list of suspicious/interesting users (which would need to be stored outside public pages for privacy reasons) – those may be subject to being shown or hidden; or, more likely, to being sorted, showing interesting things together in one block.
    • If this wish is not picked up, I ponder if and how I might implement this.
Greetings --PerfektesChaos (talk) 11:00, 16 November 2017 (UTC)[reply]

Added some possible filter options. --Vachovec1 (talk) 22:43, 18 November 2017 (UTC)[reply]

Re PerfektesChaos' comments:

  • Reverts: what? A "normal" revert (clicking on "undo" in the diff interface) has an editable summary, sure, but if not completely overwritten, the summary always begins with words like "Undid revision (number) by (user)" (for English) or a similar predefined sequence for other languages. But you probably can't identify reverts made with the "save this old version of the page" method.
  • Banned user: I can imagine A) currently blocked user or B) user marked with template en:Template:Banned user (or with something similar).

--Vachovec1 (talk) 22:43, 18 November 2017 (UTC)[reply]

Introducing the edit filters already used elsewhere seems like a logical step UX-wise (although not sure how well the database would cope for large articles without major changes to our infrastructure). For reverts, see also T152434. --Tgr (WMF) (talk) 23:46, 18 November 2017 (UTC)[reply]

There is also c:MediaWiki:Gadget-rightsfilter.js. Helder 23:38, 29 November 2017 (UTC)[reply]
  • The essential tool I always needed and didn't realise was missing until this poll. It would really save a lot of time. Very useful for COIN, SPI, and research into other persistent disruption. Up till now I have had to copy an entire page history into a regex program and do it from there (and I'm not a regex or a Quarry expert, like much of Wikipedia expects every normal user and admin to be). Kudpung (talk) 20:51, 6 December 2017 (UTC)[reply]
  • The efficient way to identify reverts is by using the digest. Anyway, the history should be collapsed when a revert is detected. It should also be collapsed for consecutive edits. Note also that "to identify true authors" is extremely difficult – what is a true editor? — Jeblad 01:09, 11 December 2017 (UTC)[reply]
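For illustration, a digest-based revert detector might look like the sketch below: it fetches revision SHA-1 hashes through the MediaWiki API and flags any revision whose content hash matches an earlier one (the API parameters are real; the page title is a placeholder):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Fetch one page's history with a content digest (SHA-1) per revision.
    resp = requests.get(API, params={
        "action": "query",
        "prop": "revisions",
        "titles": "Example",  # hypothetical page title
        "rvprop": "ids|sha1|user|timestamp",
        "rvlimit": "500",
        "format": "json",
    }).json()

    page = next(iter(resp["query"]["pages"].values()))
    seen = {}
    # Walk oldest-to-newest; a repeated digest means earlier content was restored.
    for rev in reversed(page["revisions"]):
        sha1 = rev.get("sha1")
        if sha1 in seen:
            print(f"rev {rev['revid']} by {rev['user']} reverts to rev {seen[sha1]}")
        else:
            seen[sha1] = rev["revid"]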

Voting


Overhaul spam-blacklist

  • Problem: The current blacklist system is archaic: it does not allow for levels of blacklisting and is confusing to editors. The main problems include that the spam blacklist is indiscriminate of namespace (an often recurring comment is that it should be possible to discuss a link in talk namespaces even when it cannot be used in content namespaces). The blacklist is also a black-and-white choice; allowing additions only by autoconfirmed editors, or only by admins, is not possible. Giving warnings is not possible either (on en.wikipedia, we implemented XLinkBot, which reverts and warns – giving a warning to IPs and 'new' editors that a certain link violates policies/guidelines would be a less bitey solution).
  • Who would benefit: The community at large
  • Proposed solution: Basically, replace the current mw:Extension:SpamBlacklist with a new extension based on mw:Extension:AbuseFilter, taking out the 'conditions' parsing from the AbuseFilter and replacing it with parsing only of regexes matched against added external links (technically, the current AbuseFilter is capable of doing what would be needed, except that in this form it is extremely heavyweight for the number of regexes that are on the blacklists). Expansions could be added in the form of whitelisting fields, namespace selectors, etc.
expanded solution
The following discussion has been closed. Please do not modify it.
  1. Take the current AbuseFilter, rename it to SpamFilter, take out all the code that interprets the rules ('conditions').
  2. Make two fields to replace the 'conditions' field:
    • one text field for regexes that block added external links (the blacklist). Can contain many rules (one on each line, like current spam-blacklist).
    • one text field for regexes that override the block (whitelist overriding this blacklist field; that is generally simpler and cleaner than writing a complex regex, not everybody is a specialist on regexes).
  3. Add a namespace choice (checkboxes like in search, so one can choose not to blacklist something in one particular namespace, with the addition of an 'all', a 'content-namespace only' and a 'talk-namespace only' option).
    • Some links are fine in discussions but should not be used in mainspace, others are a total nono
    • Some image links are fine in the file-namespace to tell where it came from, but not needed in mainspace
  4. Add user status choice (checkboxes for the different roles, or like the page-protection levels)
    disallow IPs and new users from using a certain link (e.g. to stop spammers from creating socks, while leaving it free for most users).
  5. Leave all the other options:
    • Discussion field for evidence (or better, a talk-page like function)
    • Enabled/disabled/deleted - not needed, turn it off, obsolete then delete
    • 'Flag the edit in the edit filter log' - maybe nice to be able to turn it off, to get rid of the real rubbish that doesn't need to be logged
    • Rate limiting - catch editors that start spamming an otherwise reasonably good link
    • Warn - could be a replacement for en:User:XLinkBot
    • Prevent the action - as is the current blacklist/whitelist function
    • Revoke autoconfirmed - make sure that spammers are caught and checked
    • Tagging - for combining certain rules to be checked by RC patrollers.
    • I would consider adding a button to auto-block editors on certain typical spambot domains (a function currently performed by one of Anomie's bots on en.wikipedia).

This should overall be much more lightweight than the current AbuseFilter (all it does is regex testing, as the spam blacklist does, whereas the AbuseFilter would have to cycle through maybe thousands of filters). One could consider expanding it so rules can be blocked or enabled on only certain pages (for heavily abused links that actually should only be used on their own subject page). Another consideration would be a 'custom reply' field, telling the editor who gets blocked by the filter why the edit was blocked.
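As an illustration of the core matching logic described above (not of the proposed extension itself), a rule with blacklist regexes, whitelist overrides and a namespace selector could be evaluated roughly like this sketch; all rule data here is hypothetical:

    import re

    # One hypothetical rule: block a domain in the content namespace only,
    # with a whitelist entry that overrides the block.
    RULES = [{
        "blacklist": [r"\bspam-example\.com\b"],
        "whitelist": [r"\bspam-example\.com/about\b"],
        "namespaces": {0},  # content namespace only
        "action": "disallow",
    }]

    def check_links(added_links, namespace):
        """Return the action of the first matching rule, or None."""
        for rule in RULES:
            if namespace not in rule["namespaces"]:
                continue
            for link in added_links:
                if any(re.search(w, link) for w in rule["whitelist"]):
                    continue  # whitelist overrides the blacklist
                if any(re.search(b, link) for b in rule["blacklist"]):
                    return rule["action"]
        return None

    print(check_links(["http://spam-example.com/page"], 0))   # -> "disallow"
    print(check_links(["http://spam-example.com/about"], 0))  # -> None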

Possible expanded features:

  1. block or whitelist links matching regexes on specific pages (disallow linking throughout except for on the subject page)
  2. block or whitelist links matching regexes when added by specific user/IP/IP-range (disallow specific users to use a domain)
  • More comments:
  • Phabricator tickets: task T6459 (where I proposed this earlier)

Discussion

  • I agree, the size of the current blacklists is difficult to work with; I would be blacklisting a lot more spam otherwise. A split of the current blacklists is also desired:
  • I still want to see a single, centralized, publicly available, machine readable spam blacklist for all the spammers, bots, black hat SEOs and other lowlifes so that they can be penalized by Google and other search engines. This list must continue to be exported to prevent spam on other websites. Autoblocking is also most useful here.
  • The same goes for URL shorteners and redirects -- this list would also be useful elsewhere. This is one example where the ability to hand out customized error messages (e.g. "hey, you added a URL shortener; use the original URL instead") is useful.

My issue with this (as I have with supposed “spam-fighting” in general) is that it causes way too much collateral damage, both when it comes to users and when it comes to content. Many useful sites are blacklisted purely because a user is banned, and if a user gets globally banned the link 🔗 gets globally blacklisted and removed from any Wikimedia property, even if it was used as a source 100% of the time. Now let's imagine a year or so later someone wants to add content using that same link (which is now called a “spamlink”): this user will be indefinitely banned simply for sourcing content. I think 🤔 that having unsourced content is a larger risk to Wikimedia projects than alleged “spam” has ever been. This is especially worrisome for mobile users (who will inevitably become the largest userbase), as when you're attempting to save an edit it doesn't even warn you why your edit won't save, but simply says “error”, so a user might attempt to save it again and then get blocked for “spamming”. Abuse filters currently don't function 100% accurately, and having editors leave the project forever simply because they attempted to use “the wrong 👎🏻” reference is bonkers. Sent 📩 from my Microsoft Lumia 950 XL with Microsoft Windows 10 Mobile 📱. --Donald Trung (Talk 🤳🏻) (My global lock 😒🌏🔒) (My global unlock 😄🌏🔓) 10:15, 15 November 2017 (UTC)[reply]

Also, after a link gets blacklisted, someone might attempt to translate a page and get blocked; the potential for collateral damage is very high. How would this "feature" attempt to keep collateral damage to a minimum? --Donald Trung (Talk 🤳🏻) (My global lock 😒🌏🔒) (My global unlock 😄🌏🔓) 10:15, 15 November 2017 (UTC)[reply]
@Donald Trung: that is not going to change; actually, this suggestion gives more freedom in how to blacklist and whitelist material. The current system is black-and-white; this gives many shades of grey to the blacklisting system. In other words, your comments relate to the current system.
Regarding the second part of your comment – yes, that is the intended use of the system: if a link is spammed on page one, then translating that page does not make it a good link in the translation (and actually, this situation could also be avoided in the new system). --Dirk Beetstra T C (en: U, T) 10:39, 15 November 2017 (UTC)[reply]
  • The blacklist currently prevents us from adding a link to a site, from the article about that site. This is irrational. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 14:03, 15 November 2017 (UTC)[reply]
    • @Pigsonthewing: What do you mean, do I have an unclear sentence? If it is what I think, it is that I would like per-article exceptions (though that is a less important feature of it). --Dirk Beetstra T C (en: U, T) 14:29, 15 November 2017 (UTC)[reply]
    • Ah, I think I get it, you are describing a shortcoming of the current system - that is indeed one of the problems (though there are reasons why sometimes we do not want to do that (e.g. malware sites), or where the link gets more broadly blacklisted (we blacklist all of .onion, which is then indeed not linkable on .onion, but also not on subject X whose official website is a .onion .. ). But the obvious cases are there indeed. I would indeed like to have the possibility to blanket whitelist for specific cases, like <subject>.com on <subject> (allowing full (primary) referencing on that single page, it is now sometimes silly that we have to allow for a /about to link to a site on the subject Wikipage to avoid nullifying the blacklist regex, or a whole set of specific whitelistings to allow sourcing on their own page), or on heavily abused sites really allow whitelisting only for a very specific target ('you can only use this link on <subject> and nowhere else'). --Dirk Beetstra T C (en: U, T) 14:35, 15 November 2017 (UTC)[reply]

Or just add an option to AbuseFilter to compare against a regexp list that's on a wikipage. (Would require some thought in that we might want to expose the matching rule in the error message and logs, but otherwise easy.)

More generally, it would be nice if we could standardize on AbuseFilter instead of having five or six different anti-abuse systems with fractured UX and capabilities. That's a bit beyond CommTech's scope though. --Tgr (WMF) (talk) 23:54, 18 November 2017 (UTC)[reply]

No, User:Tgr (WMF), using the current AbuseFilter for this is going to be a massive overload on the servers; it will still interpret the whole rule, and we would probably have hundreds if not thousands of separate filters for this. It also would not allow for whitelisting (unless, again, you write a full rule with even more overload), namespace exclusion (unless ..), user-level exclusion (unless ..).
Making the AbuseFilter more modular may be an idea .. please read my suggestions above as a detailed request for capabilities. I am not familiar with the coding of the AbuseFilter to see how far this would need to go. --Dirk Beetstra T C (en: U, T) 11:00, 20 November 2017 (UTC)[reply]

Voting


Allow filtering of recent changes and user contributions by whether they have been reverted or superseded

  • Problem: Vandalism fighting in Wikidata is tougher than in other WMF projects because edits tend to be small and numerous. The Recent Changes page has plenty of filters to focus in on things like unpatrolled changes and whether the change is still the "latest version". However, if a piece of vandalism is not the "latest version" there is no way to tell if it has already been reverted or not, leading to unnecessary duplication of effort by users trying to fight vandals.
  • Who would benefit: All wikidata users would benefit from better vandalism-fighting. Those who work on patrolling would have a much easier job.
  • Proposed solution: When an "undo" action is taken on an edit, that should be indicated in Recent Changes and user contributions, and filterable. When a "restore" action is done to an earlier version than the edit, that should similarly be indicated and filterable (the same indicator would be fine). Similarly for rollbacks. Ideally any subsequent edit that deletes or changes the value of a statement (if that was what the edit was) or the label or description (if the edit was to a label or description) or sitelink (similarly) would also show that the original edit action was overridden.
  • More comments:
  • Phabricator tickets:

Discussion


I don't see why this should be a Wikidata specific filter. Other projects might also benefit from being able to filter out reverted edits. ChristianKl (talk) 16:43, 9 November 2017 (UTC)[reply]

Yes, I do think it would be a generally useful filter, but particularly useful with Wikidata given the quantity of edits we have to deal with. Also, the "superseded" portion of this is Wikidata-specific (it's hard to judge on a general wiki page whether a damaging edit has just been replaced instead of an editor using 'undo' or 'restore', but in principle it could be done in Wikidata). ArthurPSmith (talk) 18:14, 9 November 2017 (UTC)[reply]
I think in many cases it would be possible to automatically judge on Wikipedia, too, that an edit has been undone. ChristianKl (talk) 20:16, 15 November 2017 (UTC)[reply]

ArthurPSmith: This is a good proposal, thanks for posting it. I think it would work for other projects as well as Wikidata, so I'm going to move it into the Miscellaneous category. Let me know if you think there's a different category where you think it would fit best. Thanks! -- DannyH (WMF) (talk) 18:21, 21 November 2017 (UTC)[reply]

Voting


Filter user contribution page by number of bytes changed

  • Problem: It is difficult to identify the "major" contributions of an editor, particularly for prolific editors whose edit history spans years and thousands of edits.
  • Who would benefit: Admins and editors reviewing user's contributions.
  • Proposed solution: Add a filter on user contribution pages to filter the changes by number of bytes added/deleted. That way, for example, one could get a listing of a user's contributions to the main space where they added more than 500 bytes (see the sketch after this list).
  • More comments:
  • Phabricator tickets:
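For illustration, the requested filter can already be approximated client-side, since the API exposes per-edit size deltas. A minimal sketch (real API parameters; the username is a placeholder):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Fetch mainspace contributions with their byte deltas.
    resp = requests.get(API, params={
        "action": "query",
        "list": "usercontribs",
        "ucuser": "ExampleUser",  # hypothetical username
        "ucnamespace": "0",
        "ucprop": "ids|title|timestamp|sizediff",
        "uclimit": "500",
        "format": "json",
    }).json()

    # Keep only edits that added more than 500 bytes.
    for c in resp["query"]["usercontribs"]:
        if c.get("sizediff", 0) > 500:
            print(c["timestamp"], c["title"], c["sizediff"])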

Discussion

  • My tool mw:User:PerfektesChaos/js/resultListSort already offers sorting by page name/title of contributions.
    • As soon as I find time, I will extend this to sort the contributions page alternatively back by date/time, by size, or by summary.
    • Sorting is nearly the same as filtering but leaves the smaller number below. The page will be kept locally, but changes entry order as often as desired.
    • Note that this would work on existing result, not retrieving results from the server matching a condition.
    • There is already a filtering form, by namespace, by date, by minor, by current, by new accounts, by page creation. That might be extended by min/max size.
Greetings --PerfektesChaos (talk) 10:15, 16 November 2017 (UTC)[reply]
There is also c:MediaWiki:Gadget-rightsfilter.js. Helder 23:39, 29 November 2017 (UTC)[reply]

Voting


Provide a tool to efficiently analyze the usage of a template

  • Problem: Like last year, I again want to draw your attention to the fact that working with often-used templates and making changes to them is a mess. Why? There is no tool or handy way to get to grips with how a template has actually been used and which options one has to consider (or which pages using the template need to be edited) when rewriting it. The tool https://tools.wmflabs.org/templatetiger/ written by User:Kolossos has been helpful for many years, but instead of providing live information it is based on dumps (most half a year old, some two years or even more), and there is no interface (you need to know how to manipulate the URL to filter the information). As someone wrote in last year's survey: while I have big respect for Kolossos' instrument, it's just not enough.
  • Who would benefit: Primarily users who curate and amend templates, secondarily authors who use templates in their articles
  • Proposed solution: I don't know whether it is more likely to get Kolossos' tool improved or to get a whole new tool. Solutions that I'd like to see either way:
    • For the timeliness of data: It would be nice, and a good start, if there were at least a monthly update / a monthly dump that reliably gets fed into the tool. Having live data, of course, would be even more helpful.
    • Improving UX and usability: Please provide some interface to facilitate, for example, searching for a certain text in a certain template parameter, and make the table sortable by mouse click. The dream solution is an interface like the one we know from PetScan. (A sketch of what such live analysis could pull from the API follows this list.)
  • More comments:
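For illustration, a live template-usage analysis of the kind asked for can be sketched with the API plus a wikitext parser: list pages transcluding the template, fetch their wikitext, and tally which parameters are actually used. The template name is a placeholder, and mwparserfromhell is a third-party parsing library:

    import requests
    import mwparserfromhell  # third-party wikitext parser

    API = "https://en.wikipedia.org/w/api.php"

    # Pages that transclude the template (first batch only, for brevity).
    pages = requests.get(API, params={
        "action": "query",
        "list": "embeddedin",
        "eititle": "Template:Infobox person",  # hypothetical template
        "einamespace": "0",
        "eilimit": "50",
        "format": "json",
    }).json()["query"]["embeddedin"]

    param_counts = {}
    for page in pages:
        wikitext = requests.get(API, params={
            "action": "parse", "page": page["title"],
            "prop": "wikitext", "format": "json",
        }).json()["parse"]["wikitext"]["*"]
        for tpl in mwparserfromhell.parse(wikitext).filter_templates():
            if tpl.name.matches("Infobox person"):
                for p in tpl.params:
                    name = str(p.name).strip()
                    param_counts[name] = param_counts.get(name, 0) + 1

    # Most-used parameters first.
    print(sorted(param_counts.items(), key=lambda kv: -kv[1]))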

Discussion


Voting


Vertical writing support

  • Problem: Scripts that are written vertically are not supported by the Wikipedia interface.
  • Who would benefit:
    Users of languages with vertically written scripts, including:
    1. American Sign Language SignWriting users and the ASL test wiki on Incubator. Vertical writing support will also allow the ASL Wikipedia to actually be created.
    2. Traditional Mongolian script users, including general Mongolian users in Inner Mongolia, as well as the various situations in which it could be used by users in the Republic of Mongolia.
    3. Historical scripts like Manchu, Tangut and Meroitic monumental hieroglyphs are also written vertically; supporting vertical writing will allow them to be more easily entered into Wikisource.
    4. Some Chinese/Japanese users might also prefer reading content in a vertical writing direction.
  • Proposed solution: Support vertical writing direction in Mediawiki.
  • More comments: See also mw:Requests for comment/Vertical writing support.
    The ASL incubator wiki has already implemented its own custom way of displaying vertical writing on the site.
    Some Traditional Mongolian script users have also started their own MediaWiki site, which has implemented its own method of supporting vertical writing.
  • Phabricator tickets: phab:T353, phab:T11436
  • Proposer: C933103 (talk) 06:56, 7 November 2017 (UTC)[reply]

Discussion


Voting

  • Problem: A link to a nonexistent page in the same wiki is marked red, but a link to a nonexistent page on Commons, Wikisource etc. is not red, and there is only one way to recognize it: click on it.
  • Who would benefit: All editors adding links to other wikis (like {{commonscat}}), readers, maintenance editors.
  • Proposed solution: A script already exists which marks links to pages without a Wikidata item; maybe something like that would serve this request. Another possibility is checking all these links during page rendering; usually there are no more than one or two per page. Such a link could then have e.g. class="dead-wikisource-link" in the HTML (see the existence-check sketch after this list).
  • More comments:
  • Phabricator tickets:
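For illustration, the existence check itself is cheap to express against another wiki's API; a minimal sketch (real API behaviour; the titles are placeholders):

    import requests

    def missing_titles(api_url, titles):
        """Return the subset of titles that do not exist on the target wiki."""
        resp = requests.get(api_url, params={
            "action": "query",
            "titles": "|".join(titles),
            "format": "json",
        }).json()
        # Nonexistent pages are flagged with a "missing" key.
        return [p["title"] for p in resp["query"]["pages"].values() if "missing" in p]

    print(missing_titles(
        "https://commons.wikimedia.org/w/api.php",
        ["Category:Example", "Category:Surely a missing category"],
    ))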

Discussion

  • It should be noted that they are also not "blue" (aka 'exists') links. They are light blue and indicate "external to this wiki". —TheDJ (talkcontribs) 14:29, 15 November 2017 (UTC)[reply]
  • Doing this might be a performance concern, as each wiki linked to would require opening a separate database connection to that wiki's database. There might also be edge cases of pages that don't exist directly in the database but "exist" thanks to an extension, language variants, or something like that. And, of course, it would only work for local wikis unless you went one step worse and did an API query like third-party wikis do when using Commons images. Anomie (talk) 15:01, 15 November 2017 (UTC)[reply]
  • This would probably require some kind of global links table, similar to how GlobalUsage works. Personally I doubt the benefits would be anywhere near the effort required. --Tgr (WMF) (talk) 00:15, 19 November 2017 (UTC)[reply]

Voting


Allow 'thanks' notification for a log entry

  • Problem:

Users cannot send 'thanks' notifications to a user who performed a useful action that is only shown in a log. (For contrast, a sketch of the existing revision-only Thanks API follows the Phabricator tickets below.)

  • Who would benefit:

Registered users.

  • Proposed solution:
  • More comments:

The Phabricator ticket phab:T60485 was created almost 4 years ago; its development is a big task.

  • Phabricator tickets:

phab:T60485 (and its duplicates phab:T74601, phab:T112483, phab:T139443, phab:T152218...)
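For context, here is a sketch of the existing Thanks API, which only accepts a revision ID (action=thank with a "rev" parameter); the proposal asks for an equivalent accepting a log entry. The session is assumed to be already logged in, and the revision ID is a placeholder:

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    session = requests.Session()  # assumed to be already authenticated

    # A CSRF token is required for the write action.
    token = session.get(API, params={
        "action": "query", "meta": "tokens", "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]

    session.post(API, data={
        "action": "thank",
        "rev": "123456789",  # hypothetical revision ID; no log-entry equivalent exists
        "source": "diff",
        "token": token,
        "format": "json",
    })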

Discussion


Voting


Implement deferred changes

  • Problem: Aside from edits blocked by the edit filter, vandalism and other damaging edits can still be viewed on pages for a short amount of time before they are reverted. According to a 2012 study by the Signpost, around 10% of damaging edits are seen by more than 100 readers, affecting the credibility of Wikipedia. The persistence of vandalism and BLP violations on low traffic biographies of living people is a lingering problem. Despite anti-vandalism bots and semi-automated tools, a substantial proportion of those damaging edits is not identified and reverted in a timely manner. (w:Wikipedia:Deferred changes).
  • Who would benefit:

Readers, as they are less likely to view vandalized pages. Vandal patrollers, who will have more time to revert edits.

  • Proposed solution:

Implement w:Wikipedia:Deferred changes, delaying suspicious edits from being viewed by readers until they have been reviewed by an editor, or reverted, similar to w:Wikipedia:Pending changes. Classification of suspicious edits can be done with edit filters, m:ORES and ClueBot NG's classification system.
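For illustration, the ORES part of such a classification step might look like the following sketch, which scores a revision with the "damaging" model and defers it when the probability crosses a threshold (the endpoint and model are real; the revision ID and threshold are placeholders):

    import requests

    ORES = "https://ores.wikimedia.org/v3/scores/enwiki/"

    def should_defer(rev_id, threshold=0.8):
        """Defer a revision for review if ORES rates it likely damaging."""
        scores = requests.get(ORES, params={
            "models": "damaging",
            "revids": str(rev_id),
        }).json()
        score = scores["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
        return score["probability"]["true"] >= threshold

    print(should_defer(123456789))  # hypothetical revision ID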

  • More comments:

This project has previously been developed mainly by w:User:Cenarium, and gained near-unanimous support (except for one oppose) in a 2016 RfC on enwiki. Development appeared to be active in December last year; however, the project seems to be inactive now, as no changes have been made since then. Cenarium themselves have not made an edit on the English Wikipedia since April this year.

  • Phabricator tickets:

Tasks


Commits


The finished commits are struck out.

Basic commits
For notification
For simultaneous use of regular patrol
  • gerrit:328111 Make patrol of reviewed changes optional
  • gerrit:315109 Don't autopatrol autoreviewed users in protection-based configs
For easier reviewing
Required for change tags support
  • gerrit:315344 Change tags support (in FlaggedRevs)
  • gerrit:190656 Allow patrolling of tagged changes with minimalist RC patrol (this adds 'problem' tags)

(copied from w:Wikipedia:Deferred_changes/Implementation)

Discussion


Voting


Allow additional password recovery methods

  • Problem: Right now the only way to recover your password is via email, while it is not even necessary to save an email address with your user settings at all.
  • Who would benefit:
    • Occasional authors who forgot their password and did not supply an email address, or whose email address has changed in the meantime.
    • The Volunteer Response Team, which quite frequently gets inquiries about lost passwords and can often only respond with "you will have to create a new account".
  • Proposed solution:
    • Create a password hash that can be saved separately from the email address.
    • Create other recovery methods, e.g. by "secret questions".
  • More comments:
  • Phabricator tickets:

Discussion


IMHO "secret questions" make everything more insecure, as finding the answer to "What's the birth name of your mother?" etc. is simple social engineering to break into someone else's account. "Password hashs": w:en:Template:Committed identity might be pretty close to that? Have you considered w:Multi-factor authentication? --AKlapper (WMF) (talk) 20:47, 8 November 2017 (UTC)[reply]

Two-factor authentication... Would a "normal user" (one of those who forget to update their email address in the settings) do that? --Reinhard Kraasch (talk) 21:48, 8 November 2017 (UTC)[reply]
Since the possible "secret questions" are often the same across many different sites, https://xkcd.com/792/ seems relevant too. Anomie (talk) 15:23, 9 November 2017 (UTC)[reply]
Two-factor makes account recovery harder, not easier. --Tgr (WMF) (talk) 04:52, 19 November 2017 (UTC)[reply]
I only read about it in its early days. It was confusing enough to make *everything* harder... I hope it improved. Gotta read about it again someday. - Nabla (talk) 23:32, 1 December 2017 (UTC)[reply]
  • Most of this is easily solvable by just more strongly encouraging people to register and verify their email address. Have you seen those websites where once a year they ask "is this still your email address?". Similar reminders and encouragements can be given. In my opinion not registering an email address should be an active opt-out, not a lazy default situation. —TheDJ (talkcontribs) 20:48, 9 November 2017 (UTC)[reply]
    • That's a good point; sending a reminder to said folks should be pretty easy. And yeah, we should encourage it more heavily on the registration page. Not a hard failure, but at least a "HEY ARE YOU REALLY F'ING SURE? HAVING AN EMAIL IS A GOOD IDEA YO" would encourage people not to skip out. 😂 (talk) 00:27, 10 November 2017 (UTC)[reply]
    • Maybe specifically when an online email service provider is known to be terminating, or to have terminated, their service, a reminder could be given to those people? C933103 (talk) 20:04, 11 November 2017 (UTC)[reply]
  • Maybe send a person who doesn't register their email a central notice every 3 months asking them to fill out their email address? ChristianKl (talk) 17:29, 11 November 2017 (UTC)[reply]

A password hash is basically a password, except it's impossible to remember. How would that help? If you care about your account being lost, set an email address and keep it up to date. If someone can't be trusted to do that, it's hard to imagine they would keep better track of their identity hash. +1 to nagging people with a significant editcount to set/update their email address instead. (Also, maybe allow setting a secondary recovery email address?) --Tgr (WMF) (talk) 04:52, 19 November 2017 (UTC)[reply]

I'm not a fan of the proposed alternative recovery methods. Perhaps something like adding a phone number might make sense, although that's also not without its flaws, in terms of people stealing other people's phone numbers. BWolff (WMF) (talk) 22:49, 28 November 2017 (UTC)[reply]

It is now well established that SMS is not secure enough for 2FA, but using it (or voice calls) for password recovery would be even more dangerous as not even the password would be required to break into an account. Admittedly, intercepting and redirecting messages or calls may be well beyond the abilities of a regular script-kiddie, but that's not the only group of possible attackers. This may in particular put people living in countries with oppressive regimes under especially high risk. Of course, entering a phone number may (and should) be optional, but still not everyone would be aware of the security implications, with many people happily assuming that nobody else should be able to read their text messages or hear their voice calls. Last but not least, by implementing something like this, we'll be going in the exact opposite direction of where everyone else is going nowadays (or should/will sooner or later be going, anyway).
— Luchesar • T/C 23:18, 28 November 2017 (UTC)[reply]
  • Better look into how proofs are done at Keybase. You can use multiple proofs to verify an identity, and if the proofs give sufficiently high trust, then revocation of credentials can be initiated. Please don't use SMS, but if you do, ask for an alternate return path. Note also that if an attacker asks for new credentials, then he already has a working attack vector for the special page at Wikipedia. — Jeblad 01:18, 11 December 2017 (UTC)[reply]

Voting


Only show mainspace pages in Special:WantedPages

  • Problem: As part of the special pages on every wiki, there is a page called Special:WantedPages. It lists non-existing pages by number of incoming links, in order to show which of them are the most desired. One problem is that non-articles are shown: the list includes files, talk pages, project pages, etc., which are not useful to most people.

    On some wikis, when these pages are actually created, they are still shown at Special:WantedPages. This has caused the wanted pages list to become more and more useless over the years.

    Additionally, the places the incoming links originate from are not necessarily legitimate links either. Pages that are mentioned in a template are counted as a link with every template transclusion; it would be more useful to count actual direct links. To make matters worse, pages that are checked with {{#ifexist: are also counted as a link (that is the subject of this proposal). This causes false positives to show up. For instance, at w:en:Special:WantedPages, the top result (at the time of writing) is w:en:Talk:Jay Obernolte/GA1. In reality this is not a wanted page at all; rather, a popular template is checking whether or not it exists.

  • Who would benefit: Users who are looking for articles to write, or are interested to know what popular topics are not covered on the wiki.
  • Proposed solution: Only include mainspace pages, or at least offer options to pick a namespace. Secondly, only direct incoming links from the matching namespace should be counted. Finally, assuming the queries of this Special page can be made more efficient, it should refresh the list more often so that pages that now exist are removed.

Discussion


@Pencho15: Could you update the summary of this proposal by summarizing the current issues that you see? "Solve Wanted Pages issues" is a bit vague as it could be any issues... Thanks in advance! --AKlapper (WMF) (talk) 20:50, 8 November 2017 (UTC)[reply]

  • @AKlapper (WMF): Done, I hope it's better. --Pencho15 (talk) 00:53, 9 November 2017 (UTC)[reply]
  • In the case of the pages you mention, en:Special:WantedFiles, en:Special:WantedTemplates and en:Special:WantedCategories all work perfectly fine, and I don't think they need any improvement; they are useful and updated adequately. I understand it is the same on every other wiki, and I know it is the case on the Spanish one.

    My only request concerns en:Special:WantedPages: if you look at the English page, all the top entries, and most of the page, are talk pages and assessments. Those are not encyclopedic pages and are not actually needed in Wikipedia. If they are needed at some point, they will be created, but they should not appear on this list, which is meant to show us the pieces of information missing from Wikipedia.

    Besides talk pages and assessments, you also find some wanted files that should not be there, as you have indicated, since they have their own section. A further problem is that listed files that have already been created do not disappear from this list after it is updated; they remain there forever. Spam reports and user pages also appear on some wikis, making this whole section pretty useless, when it could be as useful as the Wanted Categories, Wanted Templates and Wanted Files pages are.

    If you look at the current list on the English wiki, the first entry that should actually be there is number 14 on the list, Rehavia Rosenbaum, and only that one and entries 15, 16, 21 and 24 are actual encyclopedic entries that should be listed.

    So my proposal is to update that single en:Special:WantedPages section on all wikis to make it useful, changing its configuration so that files, talk pages, assessments, spam reports, categories, templates, user pages and all other kinds of special entries that are not actual articles disappear from it. Perhaps you could help me phrase this in a simpler way in English so my proposal is clear? --Pencho15 (talk) 06:09, 12 November 2017 (UTC)[reply]

    So, problem: in Special:WantedPages, wanted articles are entangled together with other types of wanted pages, including talk pages, templates, non-content pages and Wikipedia pages.

    Thus, proposed solution: Make a new special page that would only display wanted pages from the main namespace. C933103 (talk) 21:21, 12 November 2017 (UTC)[reply]

  • @Pencho15: I am familiar with the issues with Special:WantedPages, so I have copyedited your proposal to make it more clear what the issue is, and how it could be fixed. Hope this is okay!

    To elaborate on what I know: A while back I actually looked at the code for Special:WantedPages. It suffers from many problems, but the main one is that the query is very inefficient. If we narrowed it down to the mainspace, that should help. The other major issue is elaborated at the proposal Stop ifexist checks from appearing in Special:WhatLinksHere. This causes false positives to show up in the list.

    Also note that if we fix WantedPages as proposed, we should probably also add the necessary options to WhatLinksHere, too (exclude ifexist checks). Otherwise the number of links that are shown in WantedPages will not match the number of links shown at WhatLinksHere. MusikAnimal (WMF) (talk) 01:45, 21 November 2017 (UTC)[reply]

imo, we should allow filtering by namespace for all query special pages (that is, do the query once for each namespace, store the results for each namespace, and then add an index on qcc_namespace so we can show them efficiently). Bawolff (talk) 22:51, 28 November 2017 (UTC)[reply]

Voting


Use map for Nearby

  • Problem: Special:Nearby is not really useful in its current form – it displays a list of some articles, but the user can neither broaden the area nor select a completely different place (or select any place at all if the browser doesn’t provide the necessary API).
  • Who would benefit: Users not having the Android or iOS app who want to use Nearby.
  • Proposed solution: Use a map on Special:Nearby (or let the user chose between the list and map format).
  • More comments: Maybe a search field with a traditional GET request could be added to make this function usable at all for JavaScript-less users.
  • Phabricator tickets:
  • Proposer: Tacsipacsi (talk) 16:33, 17 November 2017 (UTC)[reply]

Discussion


I'd also just love to make it possible to easily enable this layer when you click on a coordinate in an article. I see it as just another navigation method. If you want, you should be able to just keep clicking on, deeper and deeper into Wikipedia. An article, a map, annotations in an image, or a timeline – it shouldn't have to be an article if I want to go to a related article! —TheDJ (talkcontribs) 14:12, 18 November 2017 (UTC)[reply]

I think it should be opt-in in articles, as it might not be useful everywhere (e.g. if there’s a map with the US national parks, there should be only the national parks on the map, not every article about places in the US). Also it’s less useful as wikis may not use Kartographer, and it’s nearly impossible to develop it for all existing map techniques. Otherwise it would be great, but wikis should change to Kartographer to let it work. —Tacsipacsi (talk) 14:30, 18 November 2017 (UTC)[reply]

Some time ago, when we first released the geo keywords for search, I worked up a bare-bones example (a kind of straw-man demo) of putting search results on a map, combined with using search's nearcoord keyword to restrict results to the area currently being viewed. This is of course not a solution to this request, but perhaps something to start thinking about how much data we have available (although this limits markers to 50 at a time so they don't overlap so much, and it's currently hard-coded to enwiki, but that could be changed). EBernhardson (WMF) (talk) 21:58, 28 November 2017 (UTC)[reply]
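For reference, the data behind such demos is also available through the GeoData extension's geosearch API, which lists pages with coordinates around a point; a minimal sketch (real API parameters; the coordinates are placeholders):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Articles with coordinates within 5 km of a point.
    resp = requests.get(API, params={
        "action": "query",
        "list": "geosearch",
        "gscoord": "52.5200|13.4050",  # hypothetical point (Berlin)
        "gsradius": "5000",            # metres
        "gslimit": "50",
        "format": "json",
    }).json()

    for page in resp["query"]["geosearch"]:
        print(page["title"], page["lat"], page["lon"], page["dist"])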

What about linking the wiwosm/osm-on-ol-tool (description) and metawikimedia:WikiMiniAtlas? --X:: black ::X (talk) 14:57, 10 December 2017 (UTC)[reply]

I would rather imagine a Kartographer-based solution as it’s already on-wiki (and its map looks much better IMO). —Tacsipacsi (talk) 22:39, 10 December 2017 (UTC)[reply]

Voting


Remember site notice dismissals

  • Problem: When browsing Wikipedia with Firefox in private mode, the same site notices pop up again in each new session after logging in, presumably because dismissing site notice(s) is recorded in (a) cookie(s) that get discarded after closing Firefox. This also happens when using multiple browsers or devices.
  • Who would benefit: Registered users using session cookies and/or multiple browsers or devices.
  • Proposed solution: Record the dismissal of a site notice on the server as part of the user data, so that this information is preserved between sessions and identical across browsers and devices (see the sketch after this list).
  • More comments:
  • Phabricator tickets:
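One possible server-side home for this data already exists: user preferences, which the action=options API can set, including free-form keys under the "userjs-" prefix. A minimal sketch, assuming an already-authenticated requests session; the preference name "userjs-dismissed-notices" is made up for this example:

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    session = requests.Session()  # assumed to be already logged in

    def dismiss_notice(notice_id: str) -> None:
        # Fetch a CSRF token, then record the dismissed notice id.
        token = session.get(API, params={
            "action": "query", "meta": "tokens", "format": "json",
        }).json()["query"]["tokens"]["csrftoken"]
        session.post(API, data={
            "action": "options",
            "optionname": "userjs-dismissed-notices",
            "optionvalue": notice_id,  # real code would merge with the old value
            "token": token,
            "format": "json",
        })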

Discussion

Voting

Different password for changing email

  • Problem:

Two-step login has its own problems, such as:

  • difficulty logging in when you don't have access to your phone or key generator
  • some tools, such as AWB and Huggle, not supporting it

As a result, some users do not migrate to two-step login, while we are still afraid of our accounts being hacked. In my opinion, if someone's account is hacked, the hacker shouldn't simply be able to change the email address. If changing the email address in the user preferences required a separate password, the user could still reset his/her account via email.

  • Who would benefit:

A hacked user account could be restored via its email address, which would be protected by a different password that the hacker does not have.

  • Proposed solution:

Define a separate password for changing the email address, to make it harder to fully hijack an account (see the sketch below).

  • More comments:
  • Phabricator tickets:
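A minimal sketch of the proposed mechanism, under the proposal's own assumptions: the account keeps a second, independent credential that is checked only when changing the email address. All names are illustrative, and this is not how MediaWiki actually stores credentials:

    import hashlib, hmac, os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Standard-library PBKDF2; parameters chosen for illustration.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)
    account = {
        "email": "user@example.org",
        "email_pw_hash": hash_password("separate-email-password", salt),
        "salt": salt,
    }

    def change_email(account: dict, new_email: str, email_password: str) -> bool:
        # A hacker holding only the ordinary login password fails this check.
        candidate = hash_password(email_password, account["salt"])
        if not hmac.compare_digest(candidate, account["email_pw_hash"]):
            return False
        account["email"] = new_email
        return True

    print(change_email(account, "evil@example.org", "login-password-only"))  # False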

Discussion

@Yamaha5: Are you aware of any websites with a log-in that offer such an option, and could you please name one? I can see that w:Multi-factor authentication can be cumbersome sometimes (and so can be entering a password in general but safety and security comes with some costs). However I don't see yet why having to remember two passwords instead of one would be a better solution. --AKlapper (WMF) (talk) 20:54, 8 November 2017 (UTC)[reply]

@AKlapper (WMF): After entering the first password, my bank's website asks a security question, which is a second step without a key generator. As I said, two-step login with a key generator has its own difficulties. You can check the statistics showing what percentage of users have migrated to it. Finally, we have many users who haven't migrated to two-step login; we should be concerned about their security too, and we can't force them. Yamaha5 (talk) 21:14, 8 November 2017 (UTC)[reply]
@MaxSem: mw:Manual:Bot passwords is useful for Huggle, but I can't use it for AWB. I also want to secure my account: if it is hacked, I could restore the password by email. Currently, the first thing hackers do after hacking an account is change the email! Yamaha5 (talk) 21:17, 8 November 2017 (UTC)[reply]
@AKlapper (WMF): Currently, w:Multi-factor authentication is only active for sysops, and non-sysop users can't use it. Yamaha5 (talk) 10:52, 10 November 2017 (UTC)[reply]
@Yamaha5: Actually, some user groups other than sysops are allowed to use it; just thought I'd correct that statement. Zppix (talk) 17:30, 12 November 2017 (UTC)[reply]
@Zppix: Thank you for your correction. Could you mention which groups have this access? Yamaha5 (talk) 17:32, 12 November 2017 (UTC)[reply]
@Yamaha5: Administrators, Bureaucrats, Oversighters, Central notice administrators, Global renamers, WMF Office IT, WMF Support and Safety Zppix (talk) 17:53, 12 November 2017 (UTC)[reply]
Most of them have sysop rights or are at a higher level than sysops. Yamaha5 (talk) 18:09, 12 November 2017 (UTC)[reply]
Community Wishlist Survey 2017/Miscellaneous/2 factor authentication for all seems like a better place to direct efforts to. --Tgr (WMF) (talk) 23:14, 19 November 2017 (UTC)[reply]

A different password does little to make hacking harder; also, if someone hacks your account, it does not make too much difference (with regard to the amount of damage they can do) whether they can change your email address or not. And as long as it happens rarely, we can just rely on manual recovery. --Tgr (WMF) (talk) 23:09, 19 November 2017 (UTC)[reply]

I'm of the opinion that multiple different passwords for different things would be too confusing to most users. BWolff (WMF) (talk) 22:53, 28 November 2017 (UTC)[reply]

Voting

A social wikipedia

  • Problem: Currently, all contact with other users happens only through discussion pages and articles; it isn't possible to follow the actions of users who are probably your wikifriends.
  • Who would benefit: users with long experience on the wiki
  • Proposed solution: Create social connections with users: the possibility to easily check what they do and what they modify.
  • More comments:
  • Phabricator tickets:

Discussion

@Codas: - what do you think of ChristianKl's comment? Is there a way you have in mind for doing this? There's no actionable way to do this defined in the proposal and I'd have to close it if it's still the same within the next 24 hours. Thanks. -- NKohli (WMF) (talk) 20:03, 20 November 2017 (UTC)[reply]
  • I think the best way is to consider connections like Facebook's. You can ask to follow someone and you keep a connection with them, so you can see what they do, which pages they modify and which articles they create. So you can help them in some situations, or, for example, remember a specific user for certain topics. For this reason it would be good to create a profile with interests on Wikipedia. But let's keep the discussion on the first idea... --Codas (talk) 20:35, 20 November 2017 (UTC)[reply]
But user contributions are already public. How would this be different? I'm trying to understand what you mean by "create a social connections with users" exactly. Please clarify in the proposal. -- NKohli (WMF) (talk) 20:43, 20 November 2017 (UTC)[reply]
Yes, they are public, but I think it’s necessary to have a link in your account to watch your connections. Currently you have to remember who they are and search for them like any other user. This way it would be easier to remember them and keep in touch. Or maybe a page with all the edits from your favourite users. --Codas (talk) 07:22, 21 November 2017 (UTC)[reply]
The proposal still does not say what you mean by "social connection". Where will the other person's activity show up? In the watchlist? This proposal is very similar to User watchlist, as BDavis pointed out above. Several users have expressed concerns about this. -- NKohli (WMF) (talk) 23:59, 21 November 2017 (UTC)[reply]
  • Comment Isn't this basically Facebook? As of now the only real difference between Wikipedia and Facebook is that Wikipedia requires references, so I'm opposed to such an idea. --Donald Trung (Talk 🤳🏻) (My global lock 🔒) (My global unlock 🔓) 10:12, 29 November 2017 (UTC)[reply]
  • Comment The real mission isn't to be Facebook, but to use social connections to help each other or contribute your skills alongside similar users. --Codas (talk) 08:59, 30 November 2017 (UTC)[reply]
  • "Social" does not equate to "same as facebook". Facebook have not invented nor defines what social is. Actaully, is Facebook "social", really? So my support vote does not go for facebook-y whatchlists, or whatver features. It goes to: study what can be done so that "comunity" makes sense (because I doubt it does in most "social media"). Maybe we're already doing those studies, if so, this is moot. - Nabla (talk) 23:59, 1 December 2017 (UTC)[reply]
  • Potentially, if this is abuse-prone, require the second party's consent for the first party to follow them. Gryllida 00:49, 4 December 2017 (UTC)[reply]
  • This is both the best and the worst tool for improving the climate on-wiki: the best because it might actually work (people are social animals), and the worst because it invites group-building and stalking. Create a group of users you are following, and another group of users following you. Both groups should be public. Limit the number you can follow that do not follow you back. Allow the number of coreferences to grow if you cooperate with the users on some articles. Allow a user to block another user from following them. — Jeblad 00:53, 11 December 2017 (UTC)[reply]

Voting

Improve the Score extension

  • Problem: The Score extension is not particularly easy to use, since it has no graphical input option, and it has not been updated in almost four years. Furthermore, the rendering is pixelated PNG rather than SVG, and the image is hard to resize (a paper size has to be specified) and to align (a div has to be created to align it and the audio player with anything). In addition, the double bass and possibly some other instruments are not playable. These are all limiting factors in its use, and partly as a result it is used on only 396 English Wikipedia articles and 873 English Wikisource pages, when it could be used to a much greater extent. It is indisputably inferior to the MuseScore.com web app (closed source, based on open-source software), which has none of the aforementioned problems, offers an animated, scrolling score (entirely SVG), and was probably written by a few interns and MuseScore's permanent staff of three people.
  • Who would benefit: Wikipedia and Wikisource editors, and readers of music-related articles
  • Proposed solution: Any or all of: improving the extension so that the output is an SVG file and can be easily resized and aligned; adding an OCR helper for printed and engraved scores like the DJVU reader on Wikisource; allowing MIDI/MusicXML/MuseScore files from Commons to be read by the extension; adding a visual editing function; replacing the current extension with MuseScore-based renderer/playback.
  • More comments:
  • Phabricator tickets:

Discussion

Voting

Kartographer improvements

  • Problem: The development of the Kartographer tool should be continued. The main wishes are map internationalization (exchange of Latin and non-Latin labels) and the addition of zoom level 19. Minor wishes are moving all controls (nearby, full-screen, layers) to the left side of the map, adding an additional zoom-level control, and several pushpin symbol improvements such as the use of short strings, 3-digit numbers, etc. A nearby map mode showing links to nearby articles should be added, and the Kartographer documentation should be improved.
  • Who would benefit: All wikis including Wikipedia, Commons, Wikidata, Wikivoyage, etc.
  • Proposed solution: OpenStreetMap supports map internationalization with, for instance, the name:en tag in the case of non-Latin names. International names can be fetched from Wikidata, too (see the sketch below). Maybe a collaboration with OSM programmers would be useful. Instead of purely graphical pushpins, pushpins with a text box should be added.
  • More comments: It is very difficult to estimate the development time. I think at least a year, more likely two, is needed.
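On the internationalization point, fetching alternative-language labels from Wikidata is already possible through the wbgetentities API. A minimal sketch (Q1781, Budapest, is just an example item; the requests library is assumed):

    import requests

    resp = requests.get("https://www.wikidata.org/w/api.php", params={
        "action": "wbgetentities",
        "ids": "Q1781",
        "props": "labels",
        "languages": "en|ru|ar",   # languages the map might want to show
        "format": "json",
    })
    labels = resp.json()["entities"]["Q1781"]["labels"]
    for lang, label in labels.items():
        print(f'{lang}: {label["value"]}')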

Discussion

It's a timely proposal. I would like to list two additional features, although not sure whether they are major or minor:

  • We need the functionality of reading and displaying POIs from several articles. The current version of Kartographer only allows displaying objects contained in a single article. Retrieving information from multiple articles is needed for lists on Wikipedia (e.g., cultural heritage) and for travel guides (Wikivoyage).
  • We should be able to download coordinates as GPX or similar files for offline use. This feature is especially important for Wikivoyage, because travel guides are often used offline, but it may also have broader implications, because it is a general tool for re-using geo-data collected in Wikimedia projects.

--Alexander (talk) 19:17, 18 November 2017 (UTC)[reply]
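On the GPX point above, the conversion itself is mechanical. A minimal sketch turning GeoJSON point features (the format Kartographer consumes) into a bare GPX document; field names are illustrative:

    import xml.etree.ElementTree as ET

    def geojson_points_to_gpx(features: list) -> str:
        gpx = ET.Element("gpx", version="1.1", creator="sketch",
                         xmlns="http://www.topografix.com/GPX/1/1")
        for feature in features:
            # GeoJSON stores coordinates as [longitude, latitude].
            lon, lat = feature["geometry"]["coordinates"]
            wpt = ET.SubElement(gpx, "wpt", lat=str(lat), lon=str(lon))
            name = feature.get("properties", {}).get("title")
            if name:
                ET.SubElement(wpt, "name").text = name
        return ET.tostring(gpx, encoding="unicode")

    print(geojson_points_to_gpx([{
        "geometry": {"coordinates": [19.0402, 47.4979]},
        "properties": {"title": "Example POI"},
    }]))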

Supporting this. It is unfortunate the development has been stopped.--Ymblanter (talk) 20:11, 18 November 2017 (UTC)[reply]
  • Can only support this and the general return to the development of Maps, as one of the people involved in making the Russian Wikipedia the first major Wikimedia project to include interactive maps instead of Geohack links as a default option. It was really sad to see that just after we got the community consensus on this, the entire Maps team was disbanded, IIRC. stjn[ru] 00:01, 19 November 2017 (UTC)[reply]
  • I would very much like for this to happen, and note that phabricator:tag/map-styles has a lot more styling bugs that need to be fixed. Jc86035 (talk) 01:44, 20 November 2017 (UTC)[reply]
  • I've added T155601 to the list above. This bug tracks the bugs the development team (or what's left of it) think are the most important. - Gareth (talk) 09:17, 22 November 2017 (UTC)[reply]
  • I've added T181604 to the list above. A long-standing pain point for it:voy. --Andyrom75 (talk) 08:07, 29 November 2017 (UTC)[reply]
  • I see a few issues with our current handling of maps, not all of which have existing Phabricator tasks. Kartographer seems to handle the cases where a territory is divided into multiple, colour-coded subregions poorly - to the point where even relatively-simple regions like voy:Adirondacks are using static maps instead of dynamic maps and (de-facto) a "star" featured article still needs a hand-made static map. GPX tracks (like the trail traced onto voy:Oregon Trail) need to be converted to GeoJSON and we don't have a good means of storing these (they're either in the article itself or on an external wiki); the switch to GeoJSON also means there's no easy way to download the trace from the article and load it onto a handheld GPS (as Garmin is natively GPX). There's no way to turn off OSM's POI's (cities, villages, hotels, museums...) if we want to replace them with our own, our POI markers are just generic circle-pointers instead of AIGA-style type-specific icons for food, lodging and attractions. The sequentially-numbered icons stop at 99, 99, 99... and the dynamic map provides no suitable fallback for the print version. Dynamic maps are a great idea, but they need work before they will ever replace the painstaking creation of hand-made static cartography. K7L (talk) 16:10, 29 November 2017 (UTC)[reply]

Important note from Community Tech

Hi everyone, There's a lot of excitement and support around this proposal, so I want to make sure that people understand the scope of what Community Tech will be able to do, if this ends up in the top 10.

This proposal and the discussion comments include a bunch of feature requests -- some small, some really big -- and the proposal specifically mentions this taking a year or two of the team's work. That level of a request is out of scope for the Community Wishlist Survey. We're responsible for addressing the top 10 wishes in 2018, so if this wish gets into the top 10, it'll be 1 out of 10 projects that the team works on. We can't turn the Community Tech team into a Maps team for a year. :)

It's hard to estimate the amount of work that's involved in all of these tickets and requests; that's investigation work that we would have to do as the beginning of the project. What I can say now is that we can investigate all the requests, report back with an explanation of what's feasible, and then do the work that we can feasibly do, given the size of the other 9 wishes in the top 10. We take all the top 10 wishes seriously, so it won't be a few easy fixes and then we blow the rest off; we'd want to make a significant improvement that honors the spirit of this proposal.

I should have posted about this before voting started -- RolandUnger, I'm sorry for the oversight. Let me know if folks have any questions. Thanks! -- DannyH (WMF) (talk) 20:34, 28 November 2017 (UTC)[reply]

@DannyH (WMF): Would the WMF consider bringing back the Maps team? Jc86035 (talk) 07:55, 29 November 2017 (UTC)[reply]
Maybe if this ends at #1 there is ;) —TheDJ (talkcontribs) 14:29, 29 November 2017 (UTC)[reply]
As far as I know, there aren't any plans to bring back a specific Maps development team. And -- sorry to contradict TheDJ -- I want to make it clear that voting for this wish will not lead to a Maps team. Decisions about a full-time Maps team are totally separate from the wishlist survey. But voting for this wish does mean that the Community Tech team will do some work on maps next year, which is exciting, and I'm sure we'll make some good improvements. -- DannyH (WMF) (talk) 18:08, 29 November 2017 (UTC)[reply]
DannyH (WMF), the goal of this proposal is having at least some of the pending Kartographer issues fixed. Independent of that, it would be good to know how to start a discussion about the full-time Maps team, which is clearly wished for by the community. --Alexander (talk) 19:32, 29 November 2017 (UTC)[reply]
@DannyH (WMF) you are not contradicting me. I'm saying that anything that ends up in the top 3 of the wishlist, whether or not the Community Tech team can take on the task, is likely to shape long-term decision making in some form or another ;) —TheDJ (talkcontribs) 20:13, 29 November 2017 (UTC)[reply]
This proposal shows what the community is eagerly awaiting. Every mobile app asks for access to the owner's position; nearly every piece of information is geotagged nowadays. Coordinates, and their presentation on a map, are essential to every information system. We maintain lists of, e.g., the monuments of every town or region on Wikipedia. Proper map features and their integration into the website (including the mobile version) are really important. -- DerFussi 21:25, 30 November 2017 (UTC)[reply]
Many thoughts, proposals and wishes are contributor-oriented. We should think about the users and readers as well – they do not vote here because they do not know the Meta wiki. -- DerFussi 21:42, 30 November 2017 (UTC)[reply]

Voting

Stop ifexist checks from appearing in Special:WhatLinksHere

  • Problem:

#ifexist is a very useful tool – it lets you check whether a page exists in template logic. However, doing the check also makes a link appear in Special:WhatLinksHere for the page whose existence was checked, and that causes problems for Wikimedians who are checking links to disambiguation pages, as there's no way to stop the link from appearing (see the API sketch after this proposal). This is part of a wider issue, as doing the same check in Lua also causes this problem.

  • Who would benefit:

People writing template code who want to use #ifexist to provide extra functionality, but can't. People doing disambiguation checking on wikis where #ifexist statements are in use.

  • Proposed solution:

Either stop #ifexist from creating a link in Special:WhatLinksHere, or create a new magic word that does the same as #ifexist without creating the link.

  • More comments:

This is a long-running problem: phabricator:T14019 was filed in 2007! It was also included in the 2015 Community Wishlist. I came across this more recently when trying to use #ifexist to check for location redirects in Wikidata infoboxes, which caused problems (see, e.g., en:Wikipedia:Village_pump_(technical)/Archive_158#Wikidata_problem) and is having to be manually worked around. It also caused problems at en:Template_talk:Infobox_journal#Links_to_DAB_pages with redirects to journal titles.

  • Phabricator tickets:

phabricator:T14019
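To see the problem from a checker's side: list=backlinks is the API counterpart of Special:WhatLinksHere, and today it has no way to distinguish genuine wikilinks from links registered by #ifexist checks. A minimal sketch of such a query ("Victoria" is the disambiguation page from the discussion below; the requests library is assumed):

    import requests

    resp = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "query",
        "list": "backlinks",
        "bltitle": "Victoria",
        "blnamespace": 0,
        "bllimit": 50,
        "format": "json",
    })
    for link in resp.json()["query"]["backlinks"]:
        print(link["title"])  # genuine links and #ifexist "links" look identical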

Discussion

What's the use case that causes an ifexist check for a disambiguation page? How widespread is this issue? Max Semenik (talk) 21:40, 18 November 2017 (UTC)[reply]

@MaxSem: See the links I posted above. As an example of the case that introduced me to this issue, see en:Telescope Array Project – in the infobox, the desired wikilink for the location is to en:Millard County, Utah; however, from Wikidata all I can fetch is en:Millard County, en:Utah (or the same text without wikilinks). I can find the desired link by checking {{#ifexist:Millard County, Utah}} and then use that link – however, that exact same code used at en:Great Melbourne Telescope becomes {{#ifexist:Victoria}}, which finds a link to en:Victoria, which is a disambiguation page. This then causes the problem described at [3]. This issue is a fundamental one that means we can't use ifexist like this, which is a real shame. And, again, this was first reported way back in 2007 in other situations! Thanks. Mike Peel (talk) 21:58, 18 November 2017 (UTC)[reply]
In this particular case, it sounds like you should use the interwiki link from WD instead of trying to construct the county name by hand. Max Semenik (talk) 22:47, 18 November 2017 (UTC)[reply]
@MaxSem: @Nikkimaria objects to using "[[Millard County]], [[Utah]]" rather than "[[Millard County, Utah]]" per en:WP:MOS, so I'll let them argue the case for that here. But regardless of that, #ifexist should be something that is *usable* by template developers rather than something that causes unnecessary problems. Thanks. Mike Peel (talk) 23:02, 18 November 2017 (UTC)[reply]
Probably the best way to solve that would be for the system to get the wikilink of the claim instead of ifexist (i.e. propertyX.wikipedia.en."Millard County, Utah" or whatever in Lua). Both are expensive functions anyway, so it's not like you would be trading performance. --Izno (talk) 04:38, 19 November 2017 (UTC)[reply]
I wouldn't narrow this just to disambiguation pages. I know of many usages where {{#ifexist:}} is used for the purpose of intentionally not linking non-existent pages, but linking them if they exist (in other words, a blue link or plain text instead of a red link). This is typically used in two cases: 1) when the page has been deleted and you don't want it to be re-created; 2) when you have a different mechanism for linking to non-existent pages, typically something like {{#ifexist:Page|[[Page]]|[{{fullurl:Page|action=edit&editintro=info&preload=template}} Create the Page]}} (mind that non-existent pages linked via the latter construction are not part of Special:WantedPages).
Danny B. 23:21, 18 November 2017 (UTC)[reply]

This problem is widespread on ruwiki, where there are modules/templates that automatically check the existence of various pages (and link there if they exist). Tools for checking non-existent linked pages (e.g. Special:WantedPages, tools that update wikiproject pages, like this one, relying on red links) fail when they come across pages that are checked by #ifexist (user complaints, in Russian). Jack who built the house (talk) 08:31, 19 November 2017 (UTC)[reply]

Maybe do not simply filter those ifexist links out of the result, but allow filtering them in or out. → «« Man77 »» [de] 14:11, 2 December 2017 (UTC)[reply]

@Tim.landscheidt, Thomas Obermair 4, Dominic Z., Ninovolador, Galobtter, L3X1, LikeLifer, Joalpe, Metrónomo, MBH, 91i79, Kaganer, Vort, Iliev, Facenapalm, and NickK: If you're still interested in this issue, please see Community Wishlist Survey 2022/Miscellaneous/Check if a page exists without populating WhatLinksHere. (Only pinging those that didn't vote for this in 2019 and 2021 to cut down on pings.) Thanks. Mike Peel (talk) 19:34, 28 January 2022 (UTC)[reply]

Scribunto

It should also be noted that the Scribunto title library functions which work with non-existent pages also produce a record on Special:WantedPages. So preferably the behavior should be consistent between Scribunto and wikitext.
Danny B. 23:28, 18 November 2017 (UTC)[reply]

Scribunto doesn't do anything differently. I suspect you're mistaken about #ifexists not showing on Special:WantedPages. Anomie (talk) 00:42, 20 November 2017 (UTC)[reply]
I just checked it. #ifexists does indeed result in an entry on Special:WantedPages. Anomie (talk) 15:52, 11 December 2017 (UTC)[reply]

Why this happens

MediaWiki needs some way to know which pages need to be updated when a page is created or deleted. Usually this is because of a wikilink, so it can be turned from red to blue or vice versa, but it has to happen for things like #ifexist too. If someone were to implement this request, it would require either a new field to record the "kind" of link or a whole new table. Anomie (talk) 00:42, 20 November 2017 (UTC)[reply]
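A minimal sketch of the first option mentioned above (a field recording the kind of link), using an in-memory SQLite stand-in; this is illustrative only, not MediaWiki's actual pagelinks schema:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE pagelinks (
        pl_from TEXT,   -- linking page
        pl_title TEXT,  -- link target
        pl_kind TEXT    -- 'link' for wikilinks, 'exists' for #ifexist checks
    )""")
    db.executemany("INSERT INTO pagelinks VALUES (?, ?, ?)", [
        ("Great Melbourne Telescope", "Victoria", "exists"),  # #ifexist check
        ("Queen Victoria", "Victoria", "link"),               # real wikilink
    ])

    # Red/blue-link updates on page creation still see every row ...
    print(db.execute(
        "SELECT pl_from FROM pagelinks WHERE pl_title='Victoria'").fetchall())
    # ... while WhatLinksHere could show only the genuine links.
    print(db.execute(
        "SELECT pl_from FROM pagelinks WHERE pl_title='Victoria' AND pl_kind='link'"
    ).fetchall())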

Voting

Move orphaned revisions to the archive table

  • Problem: Previously, from 2013 to 2016, when moving a page over a redirect, the redirect was left behind as an orphaned revision in the revision table. Nowadays (T106119), a deletion log entry is generated and the redirect goes into the archive table. However, even after that was resolved, there are still lots of orphaned revisions left behind.
  • Who would benefit: Users who do not want to see the orphaned revisions showing up when viewing the contribs in the mobile interface (see T151124).
  • Proposed solution: Move all the orphaned revisions to the archive table to match the current behavior (see the sketch below). The ar_namespace and ar_title fields can only be approximations, as the same page title might have been moved over more than one redirect title.
  • More comments:
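A minimal sketch of the proposed migration against a heavily simplified schema (the real revision and archive tables have many more columns, and ar_namespace/ar_title would be filled with the best available approximation, as noted above):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE page (page_id INTEGER, page_title TEXT);
    CREATE TABLE revision (rev_id INTEGER, rev_page INTEGER);
    CREATE TABLE archive (ar_rev_id INTEGER, ar_page_id INTEGER);
    INSERT INTO page VALUES (1, 'Kept_page');
    INSERT INTO revision VALUES (10, 1), (11, 2);  -- rev 11 is orphaned
    """)
    # Copy revisions whose rev_page matches no page row into archive ...
    db.execute("""
    INSERT INTO archive (ar_rev_id, ar_page_id)
    SELECT rev_id, rev_page FROM revision
    WHERE rev_page NOT IN (SELECT page_id FROM page)
    """)
    # ... then remove them from the revision table.
    db.execute("DELETE FROM revision WHERE rev_page NOT IN (SELECT page_id FROM page)")
    print(db.execute("SELECT * FROM archive").fetchall())  # [(11, 2)]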

Discussion

Voting

Autoarchive provided

  • Problem:
    • Millions of talk pages stretching indefinitely require editors to individually create archive settings, so that older entries, closed discussions, etc. are archived.
    • This is a colossal waste of time.
    • This could be made a lot easier if semi-automated tools were provided.
  • Who would benefit:
    • Wikipedia, by decreasing duplicate editing
    • Editors, by removing old or closed discussions
    • Editors using mobile and other devices, by decreasing the size of talk pages, making them easier to load
    • Articles, by ensuring discussions are up to date
  • Proposed solution:
    • As part of the editing toolkit, an option to "Add auto archiving" is provided. The editor can insert some settings (e.g. how many threads to leave, how old threads should be before archiving, what style of archive box should be used); a sketch of the resulting selection rule appears after this proposal.
    • Selected settings are automatically inserted into talk page with a template that informs other editors of those, and a small box providing links to archived talk page entries
  • More comments:
  • Phabricator tickets:
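A minimal sketch of the selection rule such a tool would apply, with the two settings named in the proposal; thread parsing and the actual page edits are out of scope here, and all names are illustrative:

    from datetime import datetime, timedelta

    def select_for_archiving(threads, max_age_days=30, min_threads_left=5):
        """threads: list of (title, last_comment_datetime), oldest first."""
        cutoff = datetime.utcnow() - timedelta(days=max_age_days)
        old = [t for t in threads if t[1] < cutoff]
        # Never archive so much that fewer than min_threads_left remain.
        keep_back = max(0, min_threads_left - (len(threads) - len(old)))
        return old[:len(old) - keep_back] if keep_back <= len(old) else []

    threads = [("Old thread", datetime(2017, 1, 1)),
               ("Recent thread", datetime.utcnow())]
    print([title for title, _ in select_for_archiving(threads, min_threads_left=1)])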

Discussion

Structured Discussions can auto-archive :-) --Tgr (talk) 08:26, 28 November 2017 (UTC)[reply]

en:User:Anne drew Andrew and Drew/SetupAutoArchive exists for enwiki. Galobtter (talk) 12:43, 3 December 2017 (UTC)[reply]

Voting

Multiple protocol support in Special:Linksearch

  • Who would benefit: Anyone who uses this special page.
  • Proposed solution: At the very least, Linksearch should return results for HTTP and HTTPS combined when no protocol is specified (see the sketch below).
  • More comments: Google now (rightfully so) boosts rankings for HTTPS sites and I am seeing HTTPS spam much more often as a consequence. Why do I need to ask the WMF to address glaring technical debt and software rot in MediaWiki three times and wait almost a decade for a fix?
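What the proposal asks Special:Linksearch to do by default can be approximated by hand today through its API counterpart, list=exturlusage. A minimal sketch querying both protocols and merging the results (example.com is a placeholder domain; the requests library is assumed):

    import requests

    API = "https://en.wikipedia.org/w/api.php"
    pages = set()
    for protocol in ("http", "https"):
        resp = requests.get(API, params={
            "action": "query",
            "list": "exturlusage",
            "euquery": "example.com",
            "euprotocol": protocol,
            "eulimit": 100,
            "format": "json",
        })
        pages.update(p["title"] for p in resp.json()["query"]["exturlusage"])
    print(sorted(pages))  # pages linking to the domain over either protocol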

Discussion

Voting

2 factor authentication for all

  • Problem: Currently, 2-factor authentication is by default only available to users with elevated user rights, like sysops. This is due to multiple unsolved usability problems that counsel caution.
  • Who would benefit: Registered users
  • Proposed solution: Solve the usability problems and improve the account recovery process (see the TOTP sketch below).
  • More comments:
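For context, the second factor in MediaWiki's two-step login is a time-based one-time password (TOTP). A minimal sketch of that scheme using the third-party pyotp library (pip install pyotp); this is not project code:

    import pyotp

    secret = pyotp.random_base32()     # shown to the user once, at enrolment
    totp = pyotp.TOTP(secret)

    code = totp.now()                  # what the phone app would display
    print("current code:", code)
    print("verifies:", totp.verify(code))  # server-side check at login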

Discussion

Voting

Feed for search engines

  • Problem: Currently, several Wikinews projects have no news feed, which is why Wikinews is poorly indexed by Google and Bing. There are some exceptions, like English Wikinews, but those feeds were installed by members of the project itself, not by the wiki software.
  • Who would benefit: People searching for news in search engines. The Wikinews projects would also benefit from better indexing in search engines.
  • Proposed solution: Introduce a general feed system for all Wikinews projects (see the sketch below).
  • More comments:
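MediaWiki does expose action=feedrecentchanges, but a per-project news feed could also be assembled from the query API. A minimal sketch building a bare RSS document from newly created main-space pages (the domain is an example; the requests library is assumed):

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get("https://en.wikinews.org/w/api.php", params={
        "action": "query",
        "list": "recentchanges",
        "rctype": "new",       # only page creations
        "rcnamespace": 0,      # main namespace only
        "rcprop": "title|timestamp",
        "rclimit": 10,
        "format": "json",
    })
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "New Wikinews articles"
    for change in resp.json()["query"]["recentchanges"]:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = change["title"]
        ET.SubElement(item, "pubDate").text = change["timestamp"]
    print(ET.tostring(rss, encoding="unicode"))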

Discussion

Voting
