Wikipedia:Bot requests/Archive 30
This is an archive of past discussions on Wikipedia:Bot requests. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current main page.
Template:Administrative divisions of South Ossetia
Hello.
Please add Template:Administrative divisions of South Ossetia to the articles included in this template. Advisors (talk) 12:24, 27 July 2009 (UTC)
- Sounds like a good task for WP:AWB, so a better place to request might be WP:AWB/Tasks. Cheers - Kingpin13 (talk) 12:30, 27 July 2009 (UTC)
I just ran this, it has already been done. Rich Farmbrough, 02:35, 1 August 2009 (UTC).
Date templates
A bot is needed, please, to convert existing "first broadcast", "foundation", "founded", "opened", "released", or similar dates in infoboxes, to use {{Start date}}, so that they are emitted as part of the included hCard or hCalendar microformats. Further details on my to-do page. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 22:25, 30 July 2009 (UTC)
- In hand. Rich Farmbrough, 02:27, 1 August 2009 (UTC).
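For illustration, a minimal sketch of the kind of substitution involved, in Python; the infobox field names and the restriction to unambiguous "Month D, YYYY" values are assumptions for the example, not part of the request:

import re

MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June",
     "July", "August", "September", "October", "November", "December"], start=1)}

# Matches e.g. "| released = August 1, 2009" inside an infobox.
PATTERN = re.compile(
    r"(\|\s*(?:released|founded|foundation|first_aired|opened)\s*=\s*)"
    r"(" + "|".join(MONTHS) + r")"
    r"\s+(\d{1,2}),\s*(\d{4})")

def wrap_start_date(wikitext):
    def repl(m):
        prefix, month, day, year = m.groups()
        return "%s{{Start date|%s|%02d|%02d}}" % (prefix, year, MONTHS[month], int(day))
    return PATTERN.sub(repl, wikitext)

print(wrap_start_date("| released = August 1, 2009"))
# -> | released = {{Start date|2009|08|01}}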
Implementation of ANI archiving following possible name change
There is a proposal at WP:AN that has considerable support, to rename the somewhat historic Wikipedia:Administrators' noticeboard/Incidents to the more obvious Wikipedia:requests for administrator assistance (RAA).
If this passes, then there are two coding or bot related issues related to implementation, and feedback is sought.
- Archiving
- It's proposed to use daily archiving similar to AFD. That might take one of three forms:
- RAA is combed regularly by a bot. Sections that have not been touched in <N> hours (N=48 or 72) are moved to an archive page based upon the date the section was started, such as WP:RAA/Archives/DATE.
- Ditto, but checks for sections being tagged "resolved" rather than looking at section start date. (May better identify "threads" and encourage old thread closure.)
- Requests are added to an archive page, and the main RAA page transcludes all archive pages that have unresolved threads.
- The former two are easier in one sense: RAA stays as one page. The latter would mean all links to RAA archives stay valid indefinitely, as a thread doesn't change location when archived.
- Since it's not clear which of these the community would prefer, comments on implementation for both options would be good.
- Search
- While not strictly a bot issue, a move like this would initially break an aspect of searching (unless the search is recoded): ideally all admin requests can be searched with one search, and to do this the search covers pages starting "Wikipedia:Administrators' noticeboard[/whatever]". That said, there are administrator pages that do not start that way anyway, at present.
- What ways might be suggested to make searching as easy as possible, if the page is renamed?
Discussion and input sought - when it's done it can then be summarized at AN to inform the discussion there. FT2 (Talk | email) 17:56, 1 August 2009 (UTC)
- The first part, regarding bots, should be fairly simple to implement if you want the main page archived by date. I don't think the third option would be a good idea; it makes watching the page in general very difficult. Due to ArbCom restrictions I cannot modify or give out the code that I use for archiving resolved/old threads, but re-creating it should not be that difficult. Searching goes beyond bot work into the area of complex tools that would need to be hosted on the toolserver. βcommand 04:10, 2 August 2009 (UTC)
- ClueBot III would be very simple to drop in. You'd just need to configure a CB3 template. CB3 can do options 1 or 2; option 3 is just ... ugly.
- Example CB3 template:
<!-- Please see http://en.wikipedia.org/wiki/User:ClueBot_III/Documentation before editing this. -->
{{User:ClueBot III/ArchiveThis
|archiveprefix=Wikipedia:Requests for administrator assistance/Archives/
|format=Y/F/d
|age=72
|index=no
|minarchthreads=0
|minkeepthreads=0
|archivenow=<nowiki>{{User:ClueBot III/ArchiveNow}},{{resolved|,{{Resolved|,{{done}},{{Done}}</nowiki>
|header=<nowiki>{{Talkarchive}}</nowiki>
|headerlevel=2
|nogenerateindex=1
|archivebox=no
|box-advert=no
}}
- -- Cobi(t|c|b) 04:33, 2 August 2009 (UTC)
- Which archive does that go to, first timestamp or last? What FT2 was looking for was archiving based on the date of the original post, not the last post. βcommand 04:42, 2 August 2009 (UTC)
- Last. -- Cobi(t|c|b) 06:48, 2 August 2009 (UTC)
Wikipedia:Requested articles
Remove lines whose first word is a blue link from the lists at Wikipedia:Requested articles.--Otterathome (talk) 16:33, 2 August 2009 (UTC)
Articles missing the US County Infobox
We'd like a bot that can find all counties in the US that don't have the Template:Infobox U.S. County infobox. Articles missing that template should be placed into Category:Missing U.S. County Infobox. Timneu22 (talk) 12:15, 23 July 2009 (UTC)
- Category:Alabama counties
- Category:Alaska boroughs
- Category:Arizona counties
- Category:Arkansas counties
- Category:California counties
- Category:Colorado counties
- Category:Connecticut counties
- Category:Delaware counties
- Category:Florida counties
- Category:Georgia (U.S. state) counties
- Category:Hawaii counties
- Category:Idaho counties
- Category:Illinois counties
- Category:Indiana counties
- Category:Iowa counties
- Category:Kansas counties
- Category:Kentucky counties
- Category:Louisiana parishes
- Category:Maine counties
- Category:Maryland counties
- Category:Massachusetts counties
- Category:Michigan counties
- Category:Minnesota counties
- Category:Mississippi counties
- Category:Missouri counties
- Category:Montana counties
- Category:Nebraska counties
- Category:Nevada counties
- Category:New Hampshire counties
- Category:New Jersey counties
- Category:New Mexico counties
- Category:New York counties
- Category:North Carolina counties
- Category:North Dakota counties
- Category:Ohio counties
- Category:Oklahoma counties
- Category:Oregon counties
- Category:Pennsylvania counties
- Category:Rhode Island counties
- Category:South Carolina counties
- Category:South Dakota counties
- Category:Tennessee counties
- Category:Texas counties
- Category:Utah counties
- Category:Vermont counties
- Category:Virginia counties
- Category:Washington (U.S. state) counties
- Category:West Virginia counties
- Category:Wisconsin counties
- Category:Wyoming counties
- There are a few other articles in Category:Counties of the United States; these are the most clearcut, however. Let me know if any other categories should be added to the list. — madman bum and angel 03:41, 24 July 2009 (UTC)
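For reference, a minimal sketch of the scan, assuming pywikibot; the plain-string template test is naive (it misses template redirects and unusual spacing) and the category list is abbreviated:

import pywikibot

site = pywikibot.Site("en", "wikipedia")
CATEGORIES = ["Category:Alabama counties", "Category:Alaska boroughs"]  # ... the full list above
EXCLUDE = "Category:Former counties of the United States"

def articles_missing_infobox():
    skip = {p.title() for p in pywikibot.Category(site, EXCLUDE).articles()}
    for name in CATEGORIES:
        for page in pywikibot.Category(site, name).articles(namespaces=0):
            if page.title() not in skip and "{{Infobox U.S. County" not in page.text:
                yield page

for page in articles_missing_infobox():
    print(page.title())  # the real bot tags the talk page instead of printing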
- Those categories are what I would use. List of U.S. counties in alphabetical order may also have some value, but probably not as much as those categories. Timneu22 (talk) 11:13, 24 July 2009 (UTC)
- Some of the state county categories above include former counties which should probably be excluded. An example would be Tennessee County in Category:Tennessee counties. -- Ichabod (talk) 12:34, 24 July 2009 (UTC)
- Well, I can exclude articles in Category:Former counties of the United States. Are there any other categories that should be excluded? — madman bum and angel 21:31, 24 July 2009 (UTC)
- Probably not. The tagging of these articles in the category is just a helpful thing so we know what to clean up. If there is an extra article or two in the category we will notice it during cleanup; it shouldn't be a big deal. Timneu22 (talk) 10:16, 25 July 2009 (UTC)
BRFA filed: Wikipedia:Bots/Requests for approval/MadmanBot 9 — madman bum and angel 05:13, 26 July 2009 (UTC)
- Timneu22, the bot request for approval was approved for trial, with eventual approval conditional on Category:Missing U.S. County Infobox being put on the talk page of the articles. I don't see a major problem with that; encyclopedic metadata on the articles, all other metadata on the talk pages. Is that all right with you? — madman bum and angel 19:46, 1 August 2009 (UTC)
- Certainly. No one really cares about talk/article - we just need the information. Thanks again. Timneu22 (talk) 20:33, 1 August 2009 (UTC)
- Done, it would seem. Please see the BRFA for more information. — madman bum and angel 21:33, 2 August 2009 (UTC)
WP:LOMJ maintenance
WP:LOMJ should contain a list of articles that don't exist.
It would be good if entries were removed when an article exists and it contains either:
- {{Infobox Journal}}, {{Infobox Academic Conference}}, or {{Infobox Magazine}}
- {{Academic-journal-stub}},{{journal-stub}}, {{humanities-journal-stub}}, {{sci-journal-stub}}, {{biology-journal-stub}}, {{chem-journal-stub}}, {{engineering-journal-stub}}, {{med-journal-stub}}, {{physics-journal-stub}}, {{socialscience-journal-stub}} (and their redirects)
If a page does exist at the name of a journal, the bot should also apply the above logic for a page disambiguated with " (journal)". John Vandenberg (chat) 02:31, 4 August 2009 (UTC)
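A rough sketch of the pruning logic, assuming pywikibot; the marker list is abbreviated, and the plain-string matching would miss template redirects:

import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
MARKERS = ("{{infobox journal", "{{infobox academic conference", "{{infobox magazine",
           "{{academic-journal-stub", "{{journal-stub", "{{sci-journal-stub")  # etc.

def has_marker(title):
    page = pywikibot.Page(site, title)
    if not page.exists():
        return False
    if page.isRedirectPage():
        page = page.getRedirectTarget()
    text = page.text.lower()
    return any(marker in text for marker in MARKERS)

def prune(list_wikitext):
    kept = []
    for line in list_wikitext.splitlines():
        m = re.match(r"\*\s*\[\[([^\]|]+)", line)  # first link on a list line
        if m and (has_marker(m.group(1)) or has_marker(m.group(1) + " (journal)")):
            continue  # an article with an infobox/stub exists: drop the entry
        kept.append(line)
    return "\n".join(kept)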
Bot for when Template value ends with break
I don't know how to program, but all the bot would have to do is: have the script [1] installed, then load articles from [2],
go to the top of the page and click "check",
then wait to click save,
and then start the process over.
P.S. That's what I'm doing now manually, but it's very tedious. --Tim1357 (talk) 03:16, 4 August 2009 (UTC)
Date tag bot request
Is it possible to create a bot that would fix incorrectly formatted date parameters in maintenance tags?
For example:
{{unreferenced|Date=August 2009}}
would mean that the "Date=" parameter is ignored, as it should be "date=" instead.
So basically, the bot should change "Date=" to "date=". This would be similar to how SmackBot changes == External Links ==
to == External links ==
I think this would be really useful - I have recently been working on undated unsourced articles, and come across this quite often! I've also done it myself, but normally catch it in the preview!
Regards, -- PhantomSteve (Contact Me, My Contribs) 10:19, 3 August 2009 (UTC)
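The substitution itself is simple; a sketch in Python, with an illustrative tag list (a real bot would need to be more careful about parameter boundaries and tag redirects):

import re

TAGS = "unreferenced|orphan|wikify|cleanup|expand|refimprove|prune"  # illustrative subset
# Lowercase a "Date=" parameter inside the listed maintenance tags only.
FIX = re.compile(r"(\{\{\s*(?:%s)[^{}]*?\|\s*)Date(\s*=)" % TAGS, re.IGNORECASE)

def fix_date_param(wikitext):
    return FIX.sub(r"\1date\2", wikitext)

print(fix_date_param("{{unreferenced|Date=August 2009}}"))
# -> {{unreferenced|date=August 2009}}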
- I'm not sure which other tags use "date=" (I know orphan, wikify, etc) - I assume all of them would need the d to be lowercase? -- PhantomSteve (Contact Me, My Contribs) 10:23, 3 August 2009 (UTC)
- Surely it would be better to fix the templates than to search through the thousands of links. I'm not sure how to change the template (except for a really messy way, where you just repeat the "date" bit for "Date", so it is possible). And the template is protected so only admins may edit. But I don't think this is a bot task; it should be fixed by a template-savvy user - Kingpin13 (talk) 10:30, 3 August 2009 (UTC)
- There are a few templates which would need sorting out - a quick search reveals the following templates (at least) use the 'date=' option: Template:cleanup, Template:cleanup-reason, Template:cleanup-section, Template:expand, Template:expand-article, Template:orphan, Template:prune, Template:refimprove and Template:unreferenced. Where do I put this request for template changes? -- PhantomSteve (Contact Me, My Contribs) 12:29, 3 August 2009 (UTC)
- removed request as this is done by SmackBot already - thanks to its creator Rich Farmbrough for clarifying this for me! -- PhantomSteve (Contact Me, My Contribs) 18:44, 4 August 2009 (UTC)
WP:INDIANA Project banner tagging
Is it possible for a bot to check the articles in each of the following 641 categories and ensure that the Indiana project banner ({{WikiProject Indiana}}) is on the talk page? The bot could make the following decisions:
A. If the Indiana banner currently on the article talk page is {{WPINDIANA}}, change it to {{WikiProject Indiana}}, and if possible leave the assessment parameters in place. (This is something that could be done in perpetuity if there is a bot that does that.)
B. If no Indiana banner is present on the talk page of an article, place one. ({{WikiProject Indiana}})
C. If the item in the category is in the file namespace, tag it as such. ({{WikiProject Indiana|class=image}})
D. If the item is in the template namespace, tag it as such. ({{WikiProject Indiana|class=template}})
E. On the categories themselves, also check for an Indiana banner, and if none is present add one ({{WikiProject Indiana|class=category}})
F. If the item is in the portal namespace, tag it as such. ({{WikiProject Indiana|class=portal}})
G. If possible, could a list of all the articles that have been altered by the bot for this task also be created?
H. (added by TonyTheTiger (talk · contribs), who would also like to use the bot for WP:CHICAGO's WP:CHIBOTCATS) Check to see if other projects have listed the article as class=GA, class=FA, or class=FL and use the same class (see the sketch after this list).
- I don't think the plugin can do this presently, I've asked for the functionality though. –xenotalk
- I don't know how they did it but both SatyrTN (talk · contribs) and Stepshep (talk · contribs) who ran prior bots were able to do this. This is one of the most important things an autotagging bot can do for a project.--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 03:54, 23 July 2009 (UTC)
- Looks like SatyrBot was using a PHP script s/he wrote themself. Not sure how ShepBot did it, it seems they were using the same plugin I do. –xenotalk 13:22, 24 July 2009 (UTC)
- I can try pre-parsing to look for the good article and featured article/list templates to do this. –xenotalk 23:45, 25 July 2009 (UTC)
I. (added by TonyTheTiger (talk · contribs)) Autostub: tag articles carrying stub templates as class=stub.
- FYI, the plugin is not currently smart enough to do it this way [3]. One has to build a list from the stub categories in order to autostub. As such, it would be helpful to segregate the stub categories ahead of time. –xenotalk
- I don't know how they did it but both SatyrTN (talk · contribs) and Stepshep (talk · contribs) who ran prior bots were able to do this.--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 03:49, 23 July 2009 (UTC)
- I can auto-tag the ones in the Chicago-stub cats, but that won't catch ones in non-Chicago stub cats. Your "J" suggestion, if implemented by the plugin programmer(s), would supplant this though. –xenotalk 13:22, 24 July 2009 (UTC)
- One possible way: "I think you could use AWB's pre-parse mode to first go through the list of articles and remove those based on your stub criteria, and then use the list option to convert them to their equivalent talk pages. Rjwilmsi 19:22, 17 July 2009 (UTC)" (this is more of a note-to-self, really) –xenotalk 12:22, 25 July 2009 (UTC)
J. (added by TonyTheTiger (talk · contribs)) If possible, mainspace articles with neither class=GA, FA, FL nor stub could be tagged with the most common class used by other projects.
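For items H and J, the class-import step reduces to scanning the banners already on the talk page; a rough sketch, assuming the other banners carry a plain class= parameter:

import re

CLASS_RE = re.compile(r"\{\{[^{}]*\|\s*class\s*=\s*(GA|FA|FL)\s*[|}]", re.IGNORECASE)

def inherited_class(talk_wikitext):
    # Return GA/FA/FL if some other project banner already says so, else None.
    m = CLASS_RE.search(talk_wikitext)
    return m.group(1).upper() if m else None

print(inherited_class("{{WikiProject Biography|class=GA|living=yes}}"))  # GA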
Note: Not all of the subcategories of Category:Indiana are listed here, as some are deemed not to be within the project's scope, so just pointing at Category:Indiana and its subcats should not be done. Once the tagging is complete, project members will be able to go through and assess each article's quality and importance, and at that time make a final determination whether the article is within the scope of the project. The primary benefit of a bot completing this task is that it will put all the newly tagged articles into our unassessed category automatically (because of the template syntax), making it easy to quickly go through them all. This will save the time of manually checking thousands of articles for banners. I expect that there are between 500 and 2,000 articles that are not tagged with banners, out of an estimated 8-9 thousand articles in the categories. (I have not determined an easy way to count the articles.) —Charles Edward (Talk | Contribs) 14:30, 9 July 2009 (UTC)
Charles Edward authorized my addendum of appropriate BOT actions H-J--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 13:46, 19 July 2009 (UTC)
- List of categories
- (removed to lighten load on this page, please subpage this)
- Hi, what exactly do you mean by "Not all of the subcategories of Category:Indiana are listed here, as some are deemed to not be within the projects scope, so just pointing at Category:Indiana and subcats should not be done."? How else would it find the pages? Thanks. AHRtbA== Talk 20:06, 10 July 2009 (UTC)
- Hello. What I mean is that a list was made of all of the subcategories of Category:Indiana. From that list, a number of categories were removed because they are not within the scope of the project. The remaining 641 categories that are within the scope of the project are listed. Those are the categories that should be scanned for banners and tagged. That is opposed to doing a top-down search starting in Category:Indiana, which is what I meant by my comment - that some of the subcategories of Indiana are not included in the above list. —Charles Edward (Talk | Contribs) 20:53, 10 July 2009 (UTC)
- Ok. I understand now, you want the bot to use the list above. If no one already has a bot to do this, I'll build one after CSDify gets approved. Do you have a deadline for this? Thanks. AHRtbA== Talk 23:00, 10 July 2009 (UTC)
- No deadline particularly, but the sooner the better of course. —Charles Edward (Talk | Contribs) 00:22, 11 July 2009 (UTC)
- Note that running any sort of bot over "all subcategories" is strongly discouraged; the normal process is to do exactly what Charles Edward did here. Anomie⚔ 03:31, 11 July 2009 (UTC)
- So even if I come across a Sub-Category in the list he gave, I'm not supposed to go through it? Thanks. AHRtbA== Talk 19:16, 11 July 2009 (UTC)
- If you're asking "Should I not process pages in Category:Taylor University faculty, even though that category is in Category:Taylor University which is in the list?", that is correct. Only tag pages that are in one of the categories specifically listed. Anomie⚔ 20:43, 11 July 2009 (UTC)
- There are many projects in need of this service now that ShepBot is no longer running. You might want to make this bot general and also incorporate some of the features that I mention in the WP:CHICAGO request below.--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 14:17, 18 July 2009 (UTC)
Ok, I think Xeno can do this task. (and is maybe doing it right now...?) @TonyTheTiger: Are there any bots that do your task in Category:WikiProject tagging bots or Category:Template substitution bots? Thanks. AHRtbA== Talk 17:56, 18 July 2009 (UTC)
- I was meaning to get to this over the weekend, but it was busier than I expected. If anyone else can take the task, please feel free otherwise I will try to get to it this week or weekend. –xenotalk 12:45, 20 July 2009 (UTC)
- Doing.... –xenotalk 18:55, 22 July 2009 (UTC)
- This is Done. 6422 edits. I pasted the 240k list of pages edited to http://en.wikipedia.org/w/index.php?title=Wikipedia:WikiProject_Indiana/Article_alerts&oldid=303930504#Pages_edited_by_tagging_bot_in_July_2009 . To see the edits themselves click here and go backwards. Or, click here for 5000 / 1422 edits. Please see User talk:Xeno#Mistagging reports, there were a few more categories that probably shouldn't be tagged. Executive summary: Category:Potawatomi (not detagged by myself, please review) and Category:Notre Dame Educational Association Philippines (already de-tagged en masse by myself). Cheers! Feel free to let me know if you need this task run again in future. –xenotalk 13:36, 24 July 2009 (UTC)
WP:CHICAGO project tagging
Stepshep (talk · contribs) seems to be inactive. He used to run ShepBot to add {{ChicagoWikiProject}} to new articles in all the cats at WP:CHIBOTCATS that did not have any of the various redirected forms of our project tag. When he added the tag he checked to see if other project tags listed the article as a WP:GA, WP:FA, or WP:FL. If not, he added the most common class that the other projects were using. I need another person to perform this task for us once a week or so.--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 19:02, 15 July 2009 (UTC)
- Maybe he only added class if it was stub, GA, FA or FL. Not sure if he did most common actually, but maybe this could be added.--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 14:26, 18 July 2009 (UTC)
- Xenobot Mk V is Doing... –xenotalk 05:10, 26 July 2009 (UTC)
- This is Done. Please ping me when you'd like me to run it again. –xenotalk 04:32, 31 July 2009 (UTC)
- That was quite helpful. Is there any chance the bot can check these cats weekly? Any chance the bot can check articles with no class to see if other projects have class=GA, FA or FL?--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 21:07, 1 August 2009 (UTC)
- I can do this on a regular basis, I'm not sure if I'll have time to do it weekly. It's not quite an automated process. As for your latter question, I've asked the AWB developers for the ability to auto-tag articles as GA/FA/FL. I suppose if I took the time to think about it, I could come up with a hack, but I'd much prefer using the plugin =) –xenotalk 21:36, 1 August 2009 (UTC)
- Should I come remind you every couple weeks or will you just do it periodically?--TonyTheTiger (t/c/bio/WP:CHICAGO/WP:LOTM) 22:22, 5 August 2009 (UTC)
- Yes, come ping me at my talk page if you don't see that I've run it again by the end of August. –xenotalk 18:47, 7 August 2009 (UTC)
Source-finding bot for WP:METAL, WP:ALTROCK, WP:ROCK
I have an idea for a bot. Three times now, when I have tried to create an article on a single released by a rock/alternative band, my searches for sources kept getting buried in forum chatter, videos, or lyrics. I was thinking a bot could be created to bypass all that crud, find the sources that would actually be useful, and post the sources found on the talk pages. --Dylan620 (contribs, logs) 20:53, 7 August 2009 (UTC)
Tagging all pages with {{dab}} and variants with {{WikiProject Disambiguation}}
Should be fairly easy to do, and should run on a regular basis (daily?, weekly?) after the initial run. Headbomb {ταλκκοντριβς – WP Physics} 18:07, 5 August 2009 (UTC)
- Possible and Coding.... So it's the disam page that gets tagged, right? (Not the talk page or something similar). Someone probably already has a bot that they can modify to do this work, (and if someone can do it faster than I can build and get this bot approved, go ahead) but I want to try my hand at a bot like this. Not too many of these types of bots are active. Thanks. AHRtbA== Talk 13:43, 6 August 2009 (UTC)
- It's the talk page that should be tagged (WikiProject banners go on the talk page of the related article). The problem is, there are about 114,593 disambig pages (see Category:Disambiguation pages). So if a bot was approved to do this, it would take days and days, even running constantly. - Kingpin13 (talk) 13:53, 6 August 2009 (UTC)
- I did notice that. :) It should be fine. The bot can load 300 pages in about 15-30 seconds, so according to my calculations, it would only take around 2.5 hours to run. (though I wouldn't run it continuously). Also, is this only running on article pages? Thanks. AHRtbA== Talk 14:33, 6 August 2009 (UTC)
- Also, does the "and variants" mean every one in and only these? Category:Disambiguation message boxes Thanks. AHRtbA== Talk 14:41, 6 August 2009 (UTC)
Yes, but the bot would only be permitted to edit once every 10 seconds, which means it would take about 30 (?) days. Also, since the template isn't rating the article (and can't, since it's a redirect), is there actually a need to tag all of these? You already have all the disambigs listed at Category:Disambiguation pages.
Also, in reply to AHRtbA, yes there are already bots to do this, and AWB is equipped to do it, but there aren't really enough bots approved for WikiProject tagging (I've been considering creating my own bot to do WikiProject tagging), so you go ahead and create one if you like (don't forget to request approval), even if it doesn't end up doing this particular task. And variants was when you would have been loading transclusions of {{dab}}; possibly Headbomb wasn't aware of the category? (you should just tag pages in Category:Disambiguation pages rather than those transcluding {{dab}} and variants) - Kingpin13 (talk) 14:51, 6 August 2009 (UTC)
- Ok. I forgot about the edit rate. It would take 14 days, but that's only on the first run. After that, it should be able to be maintained in less than 3 hours. @cat: I don't even know why I asked the last question. I guess I was reading through this, and forgot I'm already pointing my bot at Category:All disambiguation pages (is it the same thing as your cat?). Should I point at Category:All article disambiguation pages instead? (as to only do articles) Thanks. AHRtbA== Talk 15:05, 6 August 2009 (UTC)
- I don't really care how exactly the bot's inner working is achieved; I want to tag all the dabs for two purposes mostly. The first is that people see the DAB WikiProject more prominently and so are reminded that that's where they should get help for dab-related stuff if they need it, and the other is so the article alerts can be set up in a straightforward way. If you come across templates, categories, and so on, please tag them as well (again so the alerts can cover them). Headbomb {ταλκκοντριβς – WP Physics} 15:28, 6 August 2009 (UTC)
- #head: Ok, I'll go with the category that has all types in it. #king re:time: I did a test, and it seems that it's a busy project. The ratio of pages to be tagged is 12:100, which is pretty small. Is there a way that I can find out all the types of talk pages for all namespaces? Thanks. AHRtbA== Talk 15:33, 6 August 2009 (UTC)
- I looked around, and it appears that I also need to include Category:Disambiguation categories for categories. Thanks. AHRtbA== Talk 15:51, 6 August 2009 (UTC)
- Sure, use whatever categories or category works best. If you are still using C# then I can give you the code which I use in User:SDPatrolBot to convert pages to talk pages. Otherwise I'd say just create your own function (it's pretty simple coding) - Kingpin13 (talk) 15:59, 6 August 2009 (UTC)
- I looked around, and it appears that I also need to include Category:Disambiguation categories for categories. Thanks. AHRtbA== Talk 15:51, 6 August 2009 (UTC)
Ok. Actually, I'm writing it in PHP for when I get on the TS, but I did write the code for that part. (I didn't know you just have to replace the colon (:) with " Talk:") Thanks. AHRtbA== Talk 16:14, 6 August 2009 (UTC)
- Cool (does the toolserver not accept C#?), be aware that for mainspace you just add Talk: to the beginning, for talk pages, don't do anything, and make sure you only change the first colon. Cheers :) - Kingpin13 (talk) 16:18, 6 August 2009 (UTC)
- Thanks for the note, I had all that down except the limit for the first colon... added it now. That would've been a pretty bad mistake. :) Unless you know otherwise, the ToolServer runs on a Linux-based OS. Since C# is a M$ language built on .NET, they'd need to install some weird software, and then you'd have to do some weird hacks to get C# running on Linux. (I think the program is called "Wine" or something for Linux... or you could port the C# code over to Mono) Thanks. AHRtbA== Talk 16:36, 6 August 2009 (UTC)
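The title-to-talk-page conversion discussed above, as a sketch; the namespace list is an illustrative subset:

def talk_page_title(title):
    # Mainspace titles just gain a "Talk:" prefix; "Category:Foo" becomes
    # "Category talk:Foo". Only the first colon is namespace-significant,
    # and titles already in a talk namespace are returned unchanged.
    NAMESPACES = ("User", "Wikipedia", "File", "Template", "Category", "Portal", "Help")
    ns, sep, rest = title.partition(":")
    if not sep or (ns not in NAMESPACES and "talk" not in ns.lower()):
        return "Talk:" + title  # no recognised namespace prefix: mainspace
    if ns == "Talk" or ns.endswith("talk"):
        return title
    return ns + " talk:" + rest

print(talk_page_title("Mercury"))           # Talk:Mercury
print(talk_page_title("Category:Physics"))  # Category talk:Physics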
- BRFA filed. See this. Also, how often should this bot run? Thanks. AHRtbA== Talk 18:14, 6 August 2009 (UTC)
- Well ideally once a day (overnight), since it would keep things up-to-date on a per-day basis, and participation in the AfD/PRODs can be maximized. I doubt the daily runs would add a significant load on the servers. But it should at the minimum run once or twice a week, otherwise PRODs and AfDs could get through without the DAB project knowing of them. Headbomb {ταλκκοντριβς – WP Physics} 19:29, 6 August 2009 (UTC)
Ok, Once every day/two days I think would be good. I'll set it up for that, but it won't officially be on that schedule until I get TS access. Thanks. AHRtbA== Talk 19:46, 6 August 2009 (UTC)
- Is there any other aliases of the {{WikiProject Disambiguation}}, besides {{DisambigProj}} and {{DisambigProject}}? Thanks. AHRtbA== Talk 20:23, 8 August 2009 (UTC)
- See [4]. -- JLaTondre (talk) 23:55, 8 August 2009 (UTC)
Template:Current removal
I'm not sure if this is the right place, but it'd be awesome if there would be a bot out there that would regularly remove Template:Current and its sister templates from articles that are not a current event. Basically, Template:Current is supposed to be added to articles that are being edited rapidly due to them being a current event. The consequences of that (possibly out-dated information, possibly wrong information, possible vandalism, etc.) are something we need to warn our readers of. 2009 Hudson River mid-air collision would be a recent example. But this and similar templates are regularly added to articles that can be considered "current" by various definitions of the word, even though there's no need for a warning to our readers. So it would be nice if there'd be a bot that would remove these templates from articles that haven't been edited more than, say, 10 times in the last 24 hours or so. --Conti|✉ 13:56, 9 August 2009 (UTC)
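A sketch of the edit-rate test, assuming pywikibot; the threshold and the naive template lookup (no redirects or sister templates) are placeholders:

import datetime
import pywikibot

site = pywikibot.Site("en", "wikipedia")
EDIT_THRESHOLD = 10  # edits in the last 24 hours, per the proposal above

def still_current(page):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=24)
    recent = [r for r in page.revisions(total=50) if r.timestamp > cutoff]
    return len(recent) > EDIT_THRESHOLD

template = pywikibot.Page(site, "Template:Current")
for page in template.getReferences(only_template_inclusion=True, namespaces=0):
    if not still_current(page):
        print("would remove {{Current}} from", page.title())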
Tagging for WP Journals
- All articles with a {{Infobox journal}}/{{Infobox Journal}} template should be tagged with {{WP Journals}}.
- All articles with {{Infobox Academic Conference}}, {{Infobox Magazine}}, {{Academic-journal-stub}},{{journal-stub}}, {{humanities-journal-stub}}, {{sci-journal-stub}}, {{biology-journal-stub}}, {{chem-journal-stub}}, {{engineering-journal-stub}}, {{med-journal-stub}}, {{physics-journal-stub}}, {{socialscience-journal-stub}} (and their redirects) should be tagged with {{WP Journals}}
- Import |class= from other banners when possible.
- Articles tagged with {{WP Journals}} that do not have the journal infobox should be tagged with |needs-infobox=yes.
- Articles tagged with |needs-infobox=yes that have the journal infobox should be untagged.
- Preferential use is {{WP Journals}}, so replace variants with that when making an edit.
- Please run once a month.
Headbomb {ταλκκοντριβς – WP Physics} 02:02, 4 August 2009 (UTC)
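A condensed sketch of the first two items, assuming pywikibot; banner redirects, class import, and the needs-infobox logic are left out:

import pywikibot

site = pywikibot.Site("en", "wikipedia")

def tag_for_journals(article):
    talk = article.toggleTalkPage()
    text = talk.text if talk.exists() else ""
    if "{{wp journals" not in text.lower():
        talk.text = "{{WP Journals|class=}}\n" + text
        talk.save("Tagging for WikiProject Journals")

infobox = pywikibot.Page(site, "Template:Infobox journal")
for article in infobox.getReferences(only_template_inclusion=True, namespaces=0):
    tag_for_journals(article)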
- Sounds reasonable. John Vandenberg (chat) 02:26, 4 August 2009 (UTC)
- BRFA filed at Wikipedia:Bots/Requests for approval/Erik9bot 11. Erik9 (talk) 03:33, 8 August 2009 (UTC)
- Coding... the first two requests. --Tim1357 (talk) 22:08, 10 August 2009 (UTC)
2009–10 domestic football leagues and cups
Per the consensus at Wikipedia talk:WikiProject Football/Archive 33#Formal petition to change the naming conventions, could someone please create a bot to move any articles in Category:2009 domestic football (soccer) leagues, Category:2009 domestic football (soccer) cups, Category:2009-10 domestic football (soccer) leagues and Category:2009-10 domestic football (soccer) cups so that the years are at the beginning? For example, Serie A 2009–10 should be moved to 2009–10 Serie A, Greek Cup 2009–10 to 2009–10 Greek Cup, Allsvenskan 2009 to 2009 Allsvenskan and Finnish Cup 2009 to 2009 Finnish Cup. --Soccer-holicI hear voices in my head... 13:49, 11 August 2009 (UTC)
Can anyone fix redirects on the pages that link to "Premiere (pay television network)"? -- JSH-alive talk • cont • mail 12:37, 10 August 2009 (UTC)
"Wall of Recognized Content" Bot.
A while ago, I created Wikipedia:WikiProject_Physics/Recognized_content, and I planned to update it every once in a while. Then today, instead of updating it, I got smart and thought, "Hey, that's a nice task for a bot!"
Basically what I do is take the intersection of the WikiProject's main category and that of, say, Featured articles. What is left is the Featured articles of the project. So I lump them all in a section called "Featured article", and sort them alphabetically (people by last name, so I guess a bot would use the default sort value). And then I move on to former FAs, then GAs, then former GAs, and so on. It would be relatively easy to make this a bot-handled process, which could then be used by ALL WikiProjects (on a subscription basis, much like how WP:AAlerts works). The bot would get the subscription from a template such as {{WRCSubscription|ProjectCat=Physics articles}}
(the default lists all FA, FFA, GA, FGA, DYK, and so on), or if a project chooses to opt out of DYKs, then the template would look something like {{WRCSubscription|ProjectCat=Physics articles|DYK=no}}
. The walls could then be updated daily/weekly/whateverly. Headbomb {ταλκκοντριβς – WP Physics} 15:26, 25 July 2009 (UTC)
- This would be very useful for keeping Wikipedia:WikiProject Aviation/Featured and good content updated. - Trevor MacInnis (Contribs) 21:18, 26 July 2009 (UTC)
- Doing... I'll implement this. I already have the basics working. I just need to add using a template vs. a fixed list and updating the target page. Does it have to use DEFAULTSORT? I'd prefer not to have to read each page to retrieve that. Not that it's hard, but I'd rather avoid the load unless it's really desired. Also, while I'm willing to provide some options, to make it usable by multiple projects, there will need to be some reasonable standardization on output formats. That can be worked out when I get to the point of having example output, however. -- JLaTondre (talk) 14:13, 2 August 2009 (UTC)
- Well for the defaultsort you could read it once and store the sorting information in a comment, next to the entry, which you could use as a reference point. Aka something like
<!--Driver, Mini-->*{{Icon|FA}}[[Minnie Driver]]
<!--Myers, Mike-->*{{Icon|FA}}[[Mike Myers]]
<!--Nickelodeon-->*{{Icon|FA}}[[Nickelodeon]]
<!--Richardson, Matthew-->*{{Icon|FA}}[[Matthew Richardson]]
- So when you get an entry like Adam Sandler (sortkey Sandler, Adam), you can check the comments and place the new entry where it needs to be. For customizability, I think this would be reasonable for any WikiProject/Taskforce: being able to tweak what recognized content is shown (aka if you don't want delisted GAs, you should be able to exclude them), and being able to choose between column style (and the number of columns) or one line per workflow list style such as in WP:FL. Headbomb {ταλκκοντριβς – WP Physics} 00:27, 4 August 2009 (UTC)
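A sketch of how the comment scheme above could drive ordered insertion; the entry and icon formats are illustrative:

import re

def insert_sorted(lines, new_entry, sortkey):
    # Each existing line is expected to start with a <!--sortkey--> comment.
    pos = len(lines)
    for i, line in enumerate(lines):
        m = re.match(r"<!--(.*?)-->", line)
        if m and m.group(1) > sortkey:
            pos = i
            break
    lines.insert(pos, "<!--%s-->%s" % (sortkey, new_entry))
    return lines

lines = ["<!--Driver, Mini-->*{{Icon|FA}}[[Minnie Driver]]",
         "<!--Myers, Mike-->*{{Icon|FA}}[[Mike Myers]]"]
insert_sorted(lines, "*{{Icon|FA}}[[Adam Sandler]]", "Sandler, Adam")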
- I can store the defaultsort (which is better off being done locally than on the page), but it means any changes will be missed. I had already implemented options you suggested, but the amount of customization needs to be reasonable. Even with those, there are still other deltas in the various look-n-feel. -- JLaTondre (talk) 21:57, 7 August 2009 (UTC)
- Well my reasoning was that storing it on the page would mean that users could manually update the sorting information (which cannot be done if it is stored locally). For the customization, I really don't think any more than those I just listed are necessary for any project. If someone wants something else, let him/her ask, and if you don't think it's worth the hassle, then don't implement that request. It is, after all, your time. Headbomb {ταλκκοντριβς – WP Physics} 12:19, 10 August 2009 (UTC)
- I've got the mechanics pretty much working, but I need to clean-up the code and improve the options. I haven't had much time lately. I probably won't get a chance to look at it for any extended time for a couple of weeks. If anyone else is interested in doing this and can get it done sooner, feel free. Otherwise I will work on it again when I have more than a couple of minutes at a time. -- JLaTondre (talk) 02:04, 12 August 2009 (UTC)
Need a thank you note delivered
Not done
I need a bot to deliver a thanks to the people that participated in my recent admin nomination discussion. --Jeremy (blah blah • I did it!) 05:33, 4 August 2009 (UTC)
- I believe most users do this manually, to avoid sending messages to users who didn't actually !vote. You may want to ask a user who has done it before how they did it. One alternative would be to use AWB, but again, with this you risk delivering to a user who didn't !vote. I suggest simply copy 'n' pasting, using tabs. If you decide to use AWB, then fill a list from all links on the RfA page, filter it to only include user space, convert to talk pages, remove doubles, and append your message. - Kingpin13 (talk) 10:32, 4 August 2009 (UTC)
- Not to mention automated delivery is pretty impersonal. –xenotalk 18:49, 7 August 2009 (UTC)
Encyclopedia of Life (EOL) article list bot.
I made a suggestion here: [5] that described the need for a bot to get a list of articles that EOL has and Wikipedia doesn't (the list would be useful because it would make it easy to pull the missing material from EOL and create articles that way, since the license is compatible). Would anyone with bot-making capacity be able to assist, either by helping user:Bob the wikipedian make the bot or by making it yourself? Note that this is only to create the list of articles (with the whole [[ ]] between the names, all in a list format etc.) and not to create the actual articles themselves (those will be created by humans!). Cheers! Calaka (talk) 13:14, 7 August 2009 (UTC)
- This wouldn't be hard to do if we had a list of articles on the eol site. I only looked around briefly, but didn't see an easy way to extract that list. (Haven't read the previous discussions, is it there somewhere?) –xenotalk 03:54, 8 August 2009 (UTC)
- That was exactly my frustration! I tried to look around the website but was unable to find a list anywhere! I figured if I found a list (which I admit would be very large if it listed every single species in a phylum for example) I would have been willing to copy/paste it myself (and add the [[ ]] myself) but yeah. Perhaps User:Bob the wikipedian knows of a way of finding this list? Calaka (talk) 16:28, 8 August 2009 (UTC)
- I'll take a look at the website and see if I can locate any such resource. Bob the Wikipedian (talk • contribs) 02:56, 10 August 2009 (UTC)
- After looking around, I imagine it's stored on a server we don't (or aren't supposed to) have access to, and will respect that. This bot would need some sort of means to actually navigate, like a googlebot (crawler). Bob the Wikipedian (talk • contribs) 06:56, 10 August 2009 (UTC)
- Hmmm, but assuming it is legal, would it be an easy process for the bots to crawl through their servers and get the lists down into a text file etc.? Anyway, if it is not legal then we can surely ask them where they got the lists from (there must be some place that has a public-domain listing of all the species or something?) but as I said over at the missing encyclopedia talk page: that is assuming they will be willing to help us out hehe. Calaka (talk) 09:47, 11 August 2009 (UTC)
- Actually, there is no such list. Currently, to the best of my knowledge, EOL is the only project that is still trying to come up with a complete list of all biota ever known to man. The other projects seem to have given up on it, since it's such a difficult assignment. EOL was launched a couple years ago and has a plan to at least catch up with biology within around ten or so years, but how they do that is a big question.
- There are millions of species, and biologists often take the liberty of declaring something a new species when it is not, either out of confusion or because they don't like the classification given to it already. There are probably millions of dubious names, and since there's never been a comprehensive list (I don't think even Linnaeus attempted it), it's difficult to say exactly how many species of beetles there are, for instance.
- Recently I worked on a project to create a stub for every valid species of Nicrophorus, which is a genus of dung beetle. I created over 100 stubs and over 200 redirects. I used a single resource which was hundreds of pages long. It was a paper written specifically to try and determine exactly what the valid species were, and basically was a compilation of every paper ever written on a Nicrophorus beetle. There were hundreds if not thousands of papers referenced in that paper. It would take an eternity for anyone to analyze every paper written and determine which ones were valid and which ones were nomen dubium.
- There are a few organizations contributing knowledge, among them the Smithsonian and some other respected museums. I'm glad to see that the EOL doesn't stop and write an article for each species, otherwise, they'd never catch up with modern biology.
- That having been said, I think I've driven my point into the ground, as I tend to do. Sorry if I rant too much.
- Anyway, since EOL is working so hard to make this incredible dream a reality, we probably ought to respect that and possibly even help them out rather than acting as parasites. After all...once another new species is described, which database will be the first to be up-to-date? Eventually one will fall behind if people take sides. I think it's time for biologists to unite and use a single database. I just wish EOL was easier to navigate and access. But perhaps that will change within the next few years...I've seen it improve some already. Bob the Wikipedian (talk • contribs) 17:14, 11 August 2009 (UTC)
- Hehe no worries. Thanks for the explanation. Darn, I didn't realize it would be so complicated. It is a shame that there isn't such a large master list out there, but I guess if EOL continues to grow perhaps it would be the place where Wikipedia can get all of its taxonomy articles from (in addition to other sources). Ah well, thanks again and keep up the great work. Calaka (talk) 12:24, 12 August 2009 (UTC)
Date formats
Please could someone make a bot which would determine which of the two permitted date formats is predominant in an article (either dd mmmm yyyy or mmmm dd, yyyy), and then convert all the other dates in that article to that format? This would save a vast amount of editing time. Thank you! Alarics (talk) 19:15, 10 August 2009 (UTC)
- Are you going to run and manage such a bot, then? I'll get the eggs and rotten tomatoes ready... quite apart from still being proscribed by ArbCom, trying to automate a process as controversial as this would be madness. (also)Happy‑melon 19:29, 10 August 2009 (UTC)
- Why should it be so controversial? It would only be automating what is already policy, which is to standardise the date format within each article on whatever format is already the predominant one in that article. Alarics (talk) 21:13, 10 August 2009 (UTC)
- To clarify: One would of course only run it on pages where it was judged appropriate. I just need a way to automate standardising the date format in one particular article at a time. There are loads of articles where the date formats are all over the place and it looks bad. In very many cases there would be no controversy about it at all. Can someone tell me if it is technically feasible? Alarics (talk) 19:15, 11 August 2009 (UTC)
- Lightmouse has a script and a bot, and has been banned for one year by the Arbitration Committee. — Dispenser 00:55, 12 August 2009 (UTC)
- Simply standardizing the date formats (without delinking or linking) would not violate any AC restrictions as far as I know. –xenotalk 13:54, 12 August 2009 (UTC)
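For what it's worth, detecting the predominant format is mechanically simple; a sketch (the conversion itself, and the judgement of where running it is appropriate, are the hard parts per the discussion above):

import re

MONTH = ("January|February|March|April|May|June|July|"
         "August|September|October|November|December")
DMY = re.compile(r"\b\d{1,2} (?:%s) \d{4}\b" % MONTH)
MDY = re.compile(r"\b(?:%s) \d{1,2}, \d{4}\b" % MONTH)

def predominant_format(wikitext):
    dmy, mdy = len(DMY.findall(wikitext)), len(MDY.findall(wikitext))
    if dmy == mdy:
        return None  # no clear majority: leave the article alone
    return "dmy" if dmy > mdy else "mdy"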
Codex Vaticanus
Hello,
Following a renaming, the Biblical "Codex Vaticanus" (B, 03) is now "Codex Vaticanus Graecus 1209". Can a bot modify each internal link "[[Codex Vaticanus]]" or "[[Codex Vaticanus|B]]" into "[[Codex Vaticanus Graecus 1209|Codex Vaticanus]]" or "[[Codex Vaticanus Graecus 1209|B]]" in the articles on this list? :
- Biblical canon
- Biblical manuscript
- Category talk:Bible versions and translations
- Christianity in the 4th century
- Codex Athous Lavrensis
- Codex Dublinensis
- Codex Regius (New Testament)
- Codex Zacynthius
- Gospel of Mark
- List of codices
- Portal:Religion/Selected scripture
- Portal:Religion/Selected scripture/22
- Septuagint
- Talk:Codex Sinaiticus
- Talk:Dead Sea scrolls
- Talk:First Council of Nicaea
- Talk:Gospel of Luke
- Talk:Gospel of Mark
- Talk:Gospel of Mark/Archive 1
- Talk:Internal consistency of the Bible
- Talk:Sayings of Jesus on the cross
- Textual criticism
- Textual variants in the New Testament
Thanks,
Budelberger ( ) 13:51, 12 August 2009 (UTC).
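The replacement itself is a pair of plain substitutions; a sketch in Python:

def fix_codex_links(wikitext):
    # Piped links keep their label: [[Codex Vaticanus|B]] -> [[Codex Vaticanus Graecus 1209|B]]
    wikitext = wikitext.replace("[[Codex Vaticanus|", "[[Codex Vaticanus Graecus 1209|")
    # Bare links keep their old display text:
    return wikitext.replace("[[Codex Vaticanus]]",
                            "[[Codex Vaticanus Graecus 1209|Codex Vaticanus]]")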
Creating the Discussion page of an article and populating it with a template
Hi,
I am an active user of the Hindi Wikipedia (hi) (my user page at hi wiki). We at the Hindi Wikipedia are very short of contributors, and none of our editors has the capability or time to create the bot I am requesting. So here is a description of the job I wish the new bot to do.
1. It will search for all the articles which do not have the "Discussion" page.
2. It will then create the Discussion page and populate it with a template.
3. This template is a general information template similar to the English wiki template {{Talkheader}}.
4. The bot would perform this task periodically (say, every week).
We at the Hindi wiki would highly appreciate help of any sort.
Thanks,
Regards,
Gunjan (talk) 12:26, 13 August 2009 (UTC)
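A bare-bones sketch of the requested job, assuming pywikibot (which can be pointed at hi.wikipedia as easily as en); the header template name on the Hindi wiki is an assumption:

import pywikibot

site = pywikibot.Site("hi", "wikipedia")
HEADER = "{{Talkheader}}"  # substitute the hi-wiki equivalent template

def create_missing_talk_pages():
    for article in site.allpages(namespace=0, filterredir=False):
        talk = article.toggleTalkPage()
        if not talk.exists():
            talk.text = HEADER
            talk.save("Creating talk page with standard header")

create_missing_talk_pages()  # re-run weekly, e.g. from cron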
- Answered on user's talk page. Rich Farmbrough, 05:55, 23 August 2009 (UTC).
Tidying up User:Example
User:Example is a user account used in various places as an example, or as a placeholder that's guaranteed not to exist as a real user. While the account does have legitimate subpages, most new subpage creations are by inexperienced users who were trying to create a sandbox in their own userspace and got lost. Is there any bot that could watch for these, move them to the creator's own userspace, and leave them a nice note about how to find the proper location? — Gavia immer (talk) 16:43, 14 August 2009 (UTC)
Removing deprecated template Wikicite
Template wikicite is deprecated. (The functionality of Wikicite has been completely subsumed by the Cite* family of templates). There are now a few hundred articles in which the template call to Wikicite serves no purpose. Many of these fall into one class:
* {{Wikicite | id=Johnson-2000 | reference ={{Cite book | last=Johnson | first= ... }}}}
These can be replaced with
* {{Cite book | last=Johnson | first= ... }}
I think a regular expression based bot could fix these no problem. The edit summary could read
Removing unnecessary template {{tl|Wikicite}}
Thanks. See also notes at Template Talk:Wikiref. ---- CharlesGillingham (talk) 15:27, 15 August 2009 (UTC)
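A sketch of the regular expression; it handles only the flat, one-level nesting shown above, which is the class of calls described:

import re

WIKICITE = re.compile(
    r"\{\{\s*[Ww]ikicite\s*\|[^{}]*\|\s*reference\s*=\s*"  # {{Wikicite | id=... | reference =
    r"(\{\{[^{}]*\}\})\s*\}\}"                              # the wrapped {{Cite ...}} call
)

def unwrap_wikicite(wikitext):
    return WIKICITE.sub(r"\1", wikitext)

print(unwrap_wikicite("{{Wikicite | id=Johnson-2000 | reference ={{Cite book | last=Johnson }}}}"))
# -> {{Cite book | last=Johnson }}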
PUI date categories
Following a discussion on the village pump, {{Pui}} was changed to place images into categories based on dates (similar to how the proposed deletion category hierarchy is laid out). Only problem is, these categories need to be created. I'm not sure how PROD cats are created. I guess DumbBOT (BRFA · contribs · actions log · block log · flag log · user rights) creates them. Would it be possible to have PUI cats created in the same way PROD cats are? Protonk (talk) 08:55, 18 August 2009 (UTC)
- have you poked User:Tizio? βcommand 13:01, 18 August 2009 (UTC)
- I have now. Protonk (talk) 16:52, 18 August 2009 (UTC)
Former WPFF Article
Could a bot remove every instance of {{Former WPFF Article}}? See Wikipedia:Templates for deletion/Log/2009 August 3#Template:FFCOTF candidate. I don't feel like making 200 AWB edits. -- King of ♥ ♦ ♣ ♠ 17:13, 12 August 2009 (UTC)
- And the redirect page {{FormerFinalFantasyProject}}. Plastikspork (talk) 17:22, 12 August 2009 (UTC)
- Done. The few pages that remain with this template concern the template directly, and therefore should keep it. --Tim1357 (talk) 17:25, 18 August 2009 (UTC)
Add clade into taxobox of some gastropods
There is a need to add "clade Heterobranchia" to every article that contains "informal group Pulmonata" but does not already mention Heterobranchia in its taxobox. Like this: http://en.wikipedia.org/w/index.php?title=Physella&diff=308737416&oldid=301400813 Such articles can be found within the broad category Category:Gastropod families. --Snek01 (talk) 20:26, 18 August 2009 (UTC) Prose in this request was tweaked by Invertzoo (talk) 21:27, 18 August 2009 (UTC)
- Done -- I ran a wiki search for "informal group Pulmonata" and added the clade to all instances. Tim1357 (talk) 22:54, 18 August 2009 (UTC)
Proposal:Spider popular project page interwikis
From http://strategy.wikimedia.org/wiki/Proposal:Spider_popular_project_page_interwikis
Please check interwikis to other language versions on important project pages, because someone on IRC during the strategy meeting said that this particular set of interwikis is in bad shape.
Make sure that correct interwikis to and from the Village Pumps, the Help Desks, the Reference Desk, everything else on http://en.wikipedia.org/wiki/Wikipedia:Questions and the project pages which appear on stats.grok.se/en/top (listed below) exist and return good pages with correct titles.
- http://en.wikipedia.org/wiki/Wikipedia:About
- http://en.wikipedia.org/wiki/Wikipedia:Citation_needed
- http://en.wikipedia.org/wiki/Wikipedia:Community_portal
- http://en.wikipedia.org/wiki/Wikipedia:Contact_us
- http://en.wikipedia.org/wiki/Wikipedia:Searching
- http://en.wikipedia.org/wiki/Wikipedia:Copyrights
We need a report to show the state of those interwikis, so that problems can be addressed now and as they crop up; perhaps check weekly or monthly, if it's not too much load?
For those that don't exist or return bad page status values when accessed, or which have no page title, can those issues be fixed with a bot?
Please check the wiktionaries too. Thank you! 99.60.0.22 (talk) 04:24, 19 August 2009 (UTC)
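One way to start on such a report, sketched in Python against the MediaWiki API (the page list and output format here are placeholders, not part of the proposal):

import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def langlinks(title):
    # Interlanguage links recorded on an English project page.
    url = (API + "?action=query&prop=langlinks&lllimit=500&format=json&titles="
           + urllib.parse.quote(title))
    data = json.load(urllib.request.urlopen(url))
    page = next(iter(data["query"]["pages"].values()))
    return page.get("langlinks", [])

# A full report would also fetch each target wiki and verify the page exists
# and carries a matching back-link; this only counts what is recorded here.
for title in ("Wikipedia:About", "Wikipedia:Community portal"):
    print(title, "->", len(langlinks(title)), "interwikis")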
Request for assisting in adding categories
I have been editing Katipunan and the {{Katipunan}} template recently. I am requesting bot assistance to add the following categories to the people mentioned in the Katipunan article and the {{Katipunan}} template:
On the other hand, for the objects mentioned in Katipunan, please add:
Thanks to the assisting bot.--JL 09Talk to me! 06:45, 19 August 2009 (UTC)
Dealing with: Category:Non-free Wikipedia file size reduction request
There is a large backlog of images that need their size reduced. I propose that this hypothetical bot reduce each image's resolution by a set amount (perhaps 25%). --Tim1357 (talk) 23:23, 17 August 2009 (UTC)
- Not a good task for a bot. This has been proposed before, and has always been rejected on the grounds that a bot can't make the necessary quality assessment to determine if the reduced version is acceptable. Anomie⚔ 16:42, 19 August 2009 (UTC)
Replace transclusions of userboxes
There are several userboxes which have recently been moved from the template namespace to the user namespace per the userbox migration. All transclusions of these userboxes as templates must be replaced with their new locations in userspace. They are listed at Category:Wikipedia GUS userboxes. 95j || talk 22:10, 24 August 2009 (UTC)
WikiProject Zoroastrianism template
Hello,
Can a bot take all the articles in the Category:Zoroastrianism category and all its subcategories, as well as Category:Parsis, and place a {{WikiProject Zoroastrianism}} template on the pages that do not have this template? Warrior4321 17:37, 21 August 2009 (UTC)
- Completed. Warrior4321 14:00, 26 August 2009 (UTC)
Infobox settlement: incorrect flag usage resulting in thousands of transclusions for every hit
See Wikipedia:VPT#Cascading_template_transclusions, until it is archived.
Any usage of the infobox settlement template that uses a flag in the field "subdivision_name" is causing the template to load EVERY flag every time the template loads, and makes it impossible to use What Links Here to see flag template usage. Can someone fix this by making a bot that replaces the flag template usage with the linked country name and a hidden comment that that field should not contain flags?
Look at the What Links Here for Template:MEX as a start to see the extent of the problem. SchmuckyTheCat (talk) 20:21, 24 August 2009 (UTC)
- Wouldn't it be better to fix {{CountryAbbr}} / {{CountryAbbr2}} than to edit thousands of infoboxes to avoid triggering the poor design choice in those templates? Anomie⚔ 02:04, 25 August 2009 (UTC)
- I would think so. Having said that, the flags are of limited value in infoboxes, and the shortcuts are not too readable. It might well be a good background task to replace {{MEX}} with
{{Flagicon Mexico}}
(or "Mexico" where that is consensual). —Preceding unsigned comment added by Rich Farmbrough (talk • contribs) 16:51, 26 August 2009 (UTC)
Proxy blocking bot
User:ProcseeBot seems to be inactive, and I was wondering if anyone would be able to create a clone, for use on Simple English Wikipedia. Thanks, Majorly talk 18:17, 21 August 2009 (UTC)
- I saw something about this on IRC. You're probably much better off fighting behavior rather than fighting proxies (i.e., have you tried the AbuseFilter?). ProcseeBot is closed source, so a clone may be possible, but would require a rewrite from scratch. As far as I'm aware. Has someone left a note on slakr's talk page or tried e-mailing him? --MZMcBride (talk) 19:35, 21 August 2009 (UTC)
- Over the past couple of weeks, I've attempted to engage him on IRC, on his talk page, and via Special:Emailuser. I've had no success thus far. NonvocalScream (talk) 20:21, 21 August 2009 (UTC)
- I would be able to write a clone of it, but as I'm not an admin on simple, couldn't run it. (X! · talk) · @359 · 07:37, 23 August 2009 (UTC)
- That would be good. I can run perl, python. Anything else, I can learn to run. I thank you for your help. NonvocalScream (talk) 22:51, 26 August 2009 (UTC)
Archiving this page
Either
- Add a permalink, in the edit summary, to the version that was archived from
Or
- Move to a sub-page system
because the archives have no history, which makes life difficult. Rich Farmbrough, 15:10, 23 August 2009 (UTC).
- Could you elucidate? I'm not sure I follow. –xenotalk 16:58, 26 August 2009 (UTC)
Add project banner to Category:Synapsid stubs
Could someone add:
{{talkheader}} {{WikiProject Palaeontology|class=stub|importance=low}}
...to the talk pages of the articles in Category:Synapsid stubs that need it? Abyssal (talk) 14:19, 26 August 2009 (UTC)
- Doing...--Tim1357 (talk) 02:39, 27 August 2009 (UTC)
- All the pages in that category have paleontology banners. --Tim1357 (talk) 03:03, 27 August 2009 (UTC)
- Thanks! Abyssal (talk) 15:16, 27 August 2009 (UTC)
Link rot
OK, so I hate it when people start discussions that already exist, so please tell me if there is anything like this that I can research.
I want someone to help me build a bot that fixes dead links. The bot would work like this: it would take a dead link from Wikipedia:Dead external links and look for it in the Internet Archive [archive.org]. Then, it would change the link to the newly found web page. If you have not used archive.org, it works like this: take the URL of any dead link, then simply paste http://web.archive.org/web/ in front of it and press enter. The archive searches for the most recent backup of the page and then produces it. All one must do is replace the dead link in the article with the new web archive URL. Please try this yourself: try any of the links at Wikipedia:Dead external links/404/a and it will work. I have NO programming experience, so please help me with this. Tim1357 (talk) 03:07, 29 August 2009 (UTC)
- Let me clarify: the internet is big, and there are holes in the archive's backup, so the theoretical bot would have to know to skip a link when the archive displays "not in archive". Tim1357 (talk) 03:12, 29 August 2009 (UTC)
- I don't think that this is a good idea; dead links are a huge problem, but I don't think that a simple bot-fix is a good solution. Human intervention is required, to see if a better link can be made. Just IMHO, obv. Chzz ► 04:00, 29 August 2009 (UTC)
- I have considered doing something like this from time to time, but it is considerably more complicated than you might think. There are issues making sure the dead link isn't the result of vandalism, figuring out what the person who added the link was actually looking at, determining if there is a usable live version at a different URL, and determining which version to pick from internet archive if that is the best solution. All possible for a bot to do, IMO, but certainly not easy to program right. --ThaddeusB (talk) 05:17, 29 August 2009 (UTC)
- I'm sure it is difficult, since you would know. But, cant it look at the retrieved on date, and if there's an archive from the same date, use it? While not perfect, it might even use it if it's within a month or something. - Peregrine Fisher (talk) (contribs) 05:34, 29 August 2009 (UTC)
- Yes, if there is one, otherwise it would have to try & find the revision it was added using the history. However, the most difficult problem is determining if it can be replaced by a live link or not. A dead link is due to the page moving as often as it is due to it disappearing entirely.
- It just occurred to me that I also haven't checked archive.org's robot policy/terms of service. One would need to ensure they allow automated retrieval of content before even considering the project. --ThaddeusB (talk) 05:43, 29 August 2009 (UTC)
- ThaddeusB, I will look at their policy and post the question on their site to see if automated retrieval of content is allowed. I'm not sure what the big deal is over whether or not the page has moved. If it has, then it has, but the web-archive version will be the same exact thing, won't it? It actually helps, because it keeps the source from changing. Tim1357 (talk) 11:09, 29 August 2009 (UTC)
- And, the dead links that are vandalism won't produce results from the Internet Archive, and therefore will be skipped over for humans to deal with. Tim1357 (talk) 14:39, 29 August 2009 (UTC)
noinclude category on POTD templates
I need a bot to noinclude categories on the POTD templates. Basically make this edit to all the subtemplates of Template:POTD. You can see Template:POTD protected/2007-08-08 for the breakage that this is causing. — RockMFR 23:07, 29 August 2009 (UTC)
Adding templates in contemporary music composer's pages
Hi fellas,
This is a task I tried to carry out with my own bot, but I had some issues programming it, so I need your help. Basically, I think it would be worthwhile to add the CMOnline template and the BrahmsOnline template to the External links sections of articles on contemporary classical music composers.
The first template links to excerpts from sound archives. There is a web service (SOAP method) that can tell the bot if there is content to be linked for a composer.
The second one links to a bio. I will provide a list of composers for which bios are available on the linked site, and the corresponding link in an array.
So basically the task would be:
- parsing categories like "Contemporary music"
- for each composer name, check whether a link should be added, and add it if so.
I can only think of this as a benefit for the encyclopedia. Regards, --LeMiklos (talk) 10:04, 24 August 2009 (UTC)
- The task isn't hard, but it would need consensus to add that many external links, I expect. Rich Farmbrough, 00:49, 25 August 2009 (UTC).
- Ok. It's about approximately 500-1000 pages for the CMOnline template (sound excerpts); it's hard to give a precise figure, but that's it, more or less. And 200 for BrahmsOnline (bios); this one's more precise.--LeMiklos (talk) 11:59, 25 August 2009 (UTC)
- I can do it, when we reach consensus. However, where is a category in which I could find these contemporary classical music composers?--Tim1357 (talk) 17:17, 27 August 2009 (UTC)
- Hi. It's not exactly a category, but this and this should do it. By the way, I did the job for the BrahmsOnline template manually, as there were few of them. There's more work to do with the CMOnline template. Just for information, the task was approved on the French Wikipedia, even though I had to do it manually as I found nobody to run a bot. Basically, the advantage of this external link is to make available recordings that will be protected by copyright for the next 100 years (because they're recent recordings).--LeMiklos (talk) 11:38, 31 August 2009 (UTC)
Creating links to floodplain forest
There are over a hundred such links that can be created (search gives 519 hits). Changes should be from \[\[floodplain]] forest(s?) and floodplain forest(s?) to [[floodplain forest]]\1. I'm working on a translation from it:Foresta inondata. Balabiot (talk) 13:57, 30 August 2009 (UTC)
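A minimal Python sketch of the two substitutions as specified (order matters, and the lookbehind keeps the second pass from re-linking text the first pass already linked):

import re

def link_floodplain_forest(text):
    # [[floodplain]] forest(s) -> [[floodplain forest]](s)
    text = re.sub(r"\[\[floodplain\]\] forest(s?)",
                  r"[[floodplain forest]]\1", text)
    # bare floodplain forest(s) -> [[floodplain forest]](s),
    # skipping occurrences already inside a wikilink
    text = re.sub(r"(?<!\[\[)floodplain forest(s?)",
                  r"[[floodplain forest]]\1", text)
    return text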
Species
This would be a user assisted bot.
The bot would operate on the thousands of genus stub pages that exist on Wikipedia. The bot would load the genus page and assess whether or not the page has a species section. If not, it would go to the corresponding (raw) page on Wikispecies. Then, it would copy all information between the end of the colon on "Species:" and the first equals sign. Then, it would go back to the Wikipedia article, create a new ==Species== heading, and paste the species there. I know this is possible, I just lack the know-how to build it. Tim1357 (talk) 04:04, 31 August 2009 (UTC)
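A Python sketch of the lookup step described above. It takes the request literally, grabbing the text between the "Species:" label and the first "=" of the next heading; the exact Wikispecies page layout is an assumption:

import re
import urllib.parse
import urllib.request

def wikispecies_species_list(genus):
    # Raw wikitext of the Wikispecies page named after the genus.
    url = ("https://species.wikimedia.org/w/index.php?action=raw&title="
           + urllib.parse.quote(genus))
    text = urllib.request.urlopen(url).read().decode("utf-8")
    # Everything between "Species:" and the first "=" that follows.
    m = re.search(r"Species:(.*?)=", text, re.DOTALL)
    return m.group(1).strip() if m else None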
Links to the English version of Geopedia.si
Since June 2009, the Slovenian geographic information system geopedia.si has also been available in English. Could someone please run a bot and update links to the English version of geopedia.si, as has been done for Log pod Mangartom (diff) or Arčoni (diff)? The articles to be corrected are mainly located in Category:Cities, towns and villages in Slovenia. Thanks a lot. --Eleassar my talk 21:08, 1 September 2009 (UTC)
Page size changes
I'm not entirely sure this is a bot request or can be done another way...but....
Basically, what I'd like to do is keep track of changes in page size for all articles tagged with {{WikiProject Pharmacology}} (or that exist in a subcat of Category:Pharmacology articles by quality, whichever's easier) to have an idea of when an article might need its quality assessment changed. This would be done in 2 phases: one massive dump at first, and then an analysis every month or so. The output would be the article's name, the date it was last tagged, the size of the article then, the size of the article at runtime, and the change in size.
The first phase would scan ALL the articles, get their histories, find when they were assessed, compare page sizes, and so on. After that, it would only need to be done on articles updated within the last month (or whatever the frequency we choose is).
The actual criteria are more complex but, basically, it would let us know if there's an article assessed as a stub that is 20KB!
I'd also like it to dump a list of articles over 25KB as candidates for being split.
None of this would involve any editing at all.
Can this be done? I've been searching around a lot, but I can't find any tools or bots that work based on page size!
Thanks, Skittleys (talk) 19:27, 1 September 2009 (UTC)
- This seems like something that a database report (SQL query) could do fairly easily. –xenotalk 19:30, 1 September 2009 (UTC)
- Except I don't have the extra TB of space that that'll require (or so it seems, based on what I've read).... 21:58, 1 September 2009 (UTC)
- I think he meant on the Toolserver. Here's an example:
mysql> SELECT MIN(rev_len), MAX(rev_len), AVG(rev_len), STD(rev_len)
-> FROM page
-> JOIN revision on page_id = rev_page
-> WHERE /* Specify the page */
-> page_namespace = 0 AND page_title = "Tourette_syndrome"
-> AND /* Date range */ /* From now till 1 Jan 2009 */
-> "20090101004415" < rev_timestamp;
+--------------+--------------+--------------+--------------+
| MIN(rev_len) | MAX(rev_len) | AVG(rev_len) | STD(rev_len) |
+--------------+--------------+--------------+--------------+
|        68345 |        72617 |   70322.1774 |     689.5260 |
+--------------+--------------+--------------+--------------+
1 row in set (0.01 sec)
- Of course the problem is with the pollution of the data set from vandalism. — Dispenser 22:38, 2 September 2009 (UTC)
Replacing instances of Template:Importance
Rich Farmbrough, 02:57, 3 September 2009 (UTC).
Would this be the right place to request a bot to replace about 1500 transclusions of Template:Importance with Template:Notability?
A few points:
- Following a discussion on the talk page, Template:Importance has been a redirect to Template:Notability since May, with no queries or complaints so far.
- Template:Notability seems to be a good name for the template.
- Our 19th most widely used template, with 1.7 million transclusions, is Template:Impor. This is a strange name, chosen only because Template:Importance was in use. I would like to move it to Template:Importance when the replacements have been made.
Many thanks in advance for your attention, — Martin (MSGJ · talk) 13:33, 26 August 2009 (UTC)
- This seems like a good task for SmackBot, as it could do other dating work while it's there... –xenotalk 16:55, 26 August 2009 (UTC)
- Good idea. I've asked Rich Farmbrough to comment here. — Martin (MSGJ · talk) 12:30, 2 September 2009 (UTC)
Actually SmackBot does this when it comes across the template. I will look into it. Rich Farmbrough, 12:36, 2 September 2009 (UTC).
Webcite bot
I'm not sure if this has been proposed here before, but in the WebCite FAQ they suggest that a Wikipedia bot be developed to provide archive links for cited URLs.
develop a wikipedia bot which scans new wikipedia articles for cited URLs, submits an archiving request to WebCite®, and then adds a link to the archived URL behind the cited URL
This seems like a feasible idea. Smallman12q (talk) 12:18, 2 September 2009 (UTC)
- Yup, definitely feasible. - Jarry1250 [ In the UK? Sign the petition! ] 12:22, 2 September 2009 (UTC)
- One exists already =P ... well, it doesn't seem active. Ideally, it should be implemented site-wide. Perhaps do a dump weekly and see which sites are still added. Smallman12q (talk) 15:18, 2 September 2009 (UTC)
- Checklinks has a facility to auto archive links when they meet certain conditions (currently: PDFs from Featured Articles). The problem is more or less the junk, many references aren't reliable sources and it doesn't make sense to archive them. And before suggesting white-listing domains from featured articles, I've found that list to be poor. Unfortunately, I'm tied up maintaining new tools and real life to rewrite Checklinks, but I'll look into expanding the criteria to auto archival. — Dispenser 22:01, 2 September 2009 (UTC)
- Something is better than nothing... it's no fun when a good link "rots". Smallman12q (talk) 00:18, 4 September 2009 (UTC)
- WebCiteBOT is active, it has just been down the last week and half or so due to connectivity problems on my end. I restarted it this morning; it hasn't made any edits yet today because it has been in the archiving links phase all day. --ThaddeusB (talk) 02:35, 4 September 2009 (UTC)
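The submission step itself is small; a Python sketch. The archive endpoint and its url/email/returnxml parameters follow WebCite's posted technical documentation and should be treated as assumptions to verify:

import urllib.parse
import urllib.request

def webcite_submit(cited_url, contact_email):
    # Ask WebCite to take a snapshot of cited_url; on success the XML
    # reply carries the archive URL to place behind the citation.
    query = urllib.parse.urlencode({"url": cited_url,
                                    "email": contact_email,
                                    "returnxml": "true"})
    return urllib.request.urlopen(
        "http://www.webcitation.org/archive?" + query).read()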
Converting bare URLs into proper citations
There are bots already doing it. But can someone run it for articles transcluding {{Cleanup-link rot}} and then remove the template from the articles? Thanks. -- Magioladitis (talk) 00:18, 30 August 2009 (UTC)
- I'm not sure it's the same. Link rot is URLs that no longer work as expected; making them into refs doesn't fix that. Rich Farmbrough, 05:24, 3 September 2009 (UTC).
- As far as I understand, the idea is that if we have information about the content recorded (author, date, publisher, etc.), then we can track URL changes and reduce link rot. -- Magioladitis (talk) 15:53, 4 September 2009 (UTC)
Smush.it bot
Could a bot be written to use Yahoo's Smush.it to optimize (in a lossless way) the most heavily used images? Smallman12q (talk) 17:01, 2 September 2009 (UTC)
- There have been bots in the past that have run pngcrush and OptiPNG to reduce the server-side image size. However, these bots are not effective at reducing bandwidth for the end user, since most of the images are scaled using the image daemon. The image daemon sometimes produces larger files than the input and always includes a full alpha channel. It's a better idea to improve the image daemon to optimize file sizes. — Dispenser 15:01, 5 September 2009 (UTC)
Dealing with: Category:Non-free Wikipedia file size reduction request
There is a large backlog of images that need their size reduced. I propose that this hypothetical bot reduce each image's resolution by a set amount (perhaps 25%). --Tim1357 (talk) 23:23, 17 August 2009 (UTC)
- Not a good task for a bot. This has been proposed before, and has always been rejected on the grounds that a bot can't make the necessary quality assessment to determine if the reduced version is acceptable. Anomie⚔ 16:42, 19 August 2009 (UTC)
- I brought this discussion back from the archives because I had an idea. What about doing things more specifically? For example, if images are to be used in an album article's infobox, then they should unconditionally be 300px or less. So, the list of images to be resized would be the intersection of these three things:
- Image is in Category:Non-free Wikipedia file size reduction request
- Image is in Category:Album covers
- Image is greater than 300px.
All of these images can then be resized to 300px.
Note: this process can be then applied to a number of other uses, such as film posters, dvd covers, and video game covers.
Tim1357 (talk) 22:08, 3 September 2009 (UTC)
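For the mechanical part of such a run, a minimal Python sketch using PIL; whether a fixed 300px ceiling is appropriate is exactly the quality judgment raised below, so this shows only the resize itself:

from PIL import Image

def shrink_cover(path, max_px=300):
    # Downscale so the longer side is at most max_px, keeping aspect ratio.
    img = Image.open(path)
    if max(img.size) <= max_px:
        return False  # already within the limit
    img.thumbnail((max_px, max_px), Image.ANTIALIAS)  # LANCZOS in newer Pillow
    img.save(path)
    return True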
- This still has the same issue Anomie brought up; image reduction algorithms are not always the best, and can leave a very poor-quality image, especially in cases where there is a lot of fine detail on the album cover. Yes, fair-use images are supposed to be low-quality, but not to the point where the image doesn't represent the subject well. Image reductions need to be limited to manual processing. Hersfold (t/a/c) 17:23, 6 September 2009 (UTC)
Getting a count on lists
Not sure this is bot work per se, but asking here is certainly my best shot at getting this done. Could someone give me a rough count of the number of lists on the en.wiki? There are currently 1500 featured lists and I was wondering how that compared to the total. Better yet, is there something on the toolserver that allows counting the number of unique articles in a category and its subcategories? Thanks, Pichpich (talk) 21:08, 3 September 2009 (UTC)
- I just saw something saying Google finds 165,000 lists on WP... ah, here we are: Template talk:Cleanup-list. A database dump scan shows 55,3030 articles with "list" or "lists" in their titles. Rich Farmbrough, 20:46, 6 September 2009 (UTC).
Link Change database Municipality atlas Netherlands 1868
Hi, the database Municipality Atlas Netherlands ca. 1868 (Kuyper Atlas) has been moved. As a result, 500 external links on the English-language Wikipedia became dead links. It would take a long time to update them one by one, so I was wondering if a bot could help with this. I don't know much about the technical side, so maybe somebody could help me out.
Example: http://en.wikipedia.org/wiki/Vlodrop, with the external link "Map of the former municipality in 1868". The old (dead) link is http://www.kuijsten.de/atlas/li/vlodrop.html; the new link is http://www.atlas1868.nl/li/vlodrop.html
The directory structure is still the same, so if the string "kuijsten.de/atlas" can be changed to "atlas1868.nl" on all 500 Wikipedia pages, the links will be alive again!
Regards, Quarium, The Netherlands --Quarium (talk) 20:06, 4 September 2009 (UTC)
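Since the directory structure is unchanged, the edit itself is a plain substring swap; in Python:

def fix_kuyper_link(wikitext):
    # kuijsten.de/atlas/... -> atlas1868.nl/... (same path below the root)
    return wikitext.replace("kuijsten.de/atlas", "atlas1868.nl")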
- Best is to email them asking them to fix the problem. Wikipedia users shouldn't be the maintenance crew for lazy admins. Not to mention that this problem affects more than just us. — Dispenser 14:20, 5 September 2009 (UTC)
Easy to fix, fixing. Rich Farmbrough, 04:00, 7 September 2009 (UTC).
- Thanks for helping out Rich--Quarium (talk) 08:15, 7 September 2009 (UTC)
- Too bad I didn't see this request earlier... I have been replacing these external links with the {{Kuyper}} template, in case the URL changes again. That's not necessary now, of course, as all links work again now. -- Eugène van der Pijll (talk) 12:15, 7 September 2009 (UTC)
Change of access date for NZ rivers
In August, a large batch of stub articles was created by bot for New Zealand rivers, using information from the LINZ website. They are all marked with {{LINZ}}, showing "accessdate=07/12/09". In New Zealand, that means December 7th. A bot would be useful to change all articles in Category:Rivers of New Zealand which have that access date on the LINZ template to read "12/07/09" (or "12 July 2009", for that matter). Cheers, Grutness...wha? 22:44, 6 September 2009 (UTC)
- Neither of the slash notations is acceptable, though they are added extensively. Rich Farmbrough, 13:36, 7 September 2009 (UTC).
- OK I AWB'd this one. Rich Farmbrough, 02:27, 8 September 2009 (UTC).
DASHBot
I don't know if this has been suggested before, but would it be possible to create a bot to move pages? There are lots that have spaced-hyphen titles and should have spaced ndash titles. I have moved many manually in the past (e.g. this fun-filled morning) but it is very tedious. I was thinking this would be a simple task for a bot, and had hoped AWB might do it, but I don't think it can. Is there anyone who could make one, or help a complete bot noob make one, as there are hundreds more similar titles that need fixing? Thanks in advance, Rambo's Revenge (talk) 22:41, 5 September 2009 (UTC)
- How would the bot distinguish between what should and shouldn't be moved? Hersfold (t/a/c) 03:11, 6 September 2009 (UTC)
- It would probably have to be manually assisted, it basically would be an ideal job for AWB and run in a similar way, but AWB can't do it so I believe a bot would be required. As someone who has no experience with bots please tell me if I'm barking up the wrong tree. Rambo's Revenge (talk) 08:56, 6 September 2009 (UTC)
- Is there any case where it would be incorrect to replace the string “ - ” (space, hyphen, space) with “ – ” (space, en dash, space)? I have never come across such a case. If such a case does not exist, there would surely be no problem running such a bot without manual assistance. MTC (talk) 09:57, 6 September 2009 (UTC)
- Generally no, but I guess that, as with spelling mistakes, there will be instances where it is deliberate e.g. the hyphen article. Rambo's Revenge (talk) 10:34, 6 September 2009 (UTC)
- That’s true for article text, but the proposal was for article titles, and I don’t know of any case where a spaced hyphen would be correct in an article title. MTC (talk) 10:48, 6 September 2009 (UTC)
- Good point, I can't think of when an article title should ever be a spaced hyphen so I guess this could be fully automated. Does that make life easier for making bots then? Rambo's Revenge (talk) 10:51, 6 September 2009 (UTC)
- Yes... theoretically you could just have a continually-running bot that would repeatedly pass through Special:Allpages looking for articles with a hyphen in the title. It'd be a kinda slow way of going about it, but probably more effective than expecting someone to manually tag every one.
- Bot's run cycle:
- Start up
- DO:
- Get next list of articles from allpages (starting from ! or where it last left off)
- Look for articles with " - " in title
- Use regexen to replace " - " with " – "
- Move said article(s) to new title(s)
- Whine and complain somewhere if it can't move it due to conflicting titles
- UNTIL: it dies or is killed by the operator or emergency shutoff process
- For it to continually run, though, it'll need someone with a toolserver account, more than likely. Hersfold (t/a/c) 17:00, 6 September 2009 (UTC)
Well it's not urgent so it could run off database dumps. Rich Farmbrough, 19:54, 6 September 2009 (UTC).
- Sorry to sound completely stupid, but how exactly would one make a bot to run off either "a toolserver account" or "database dumps" and would I need to get some form of programming software to do this? Rambo's Revenge (talk) 20:19, 6 September 2009 (UTC)
- You can download a database dump from here. Look for "enwiki" and click through to find the list of files. You would want the article names file for this; it is only a few meg, and you will need to find the unpacking software (gzip). I have run a scan and I found 8,181 articles with " - " in the title, but I didn't rule out redirects. Yes, you would need some programming competence, I would say. I was looking at coding a solution for this in Perl using the WP:API; it's fairly straightforward once you can see the wood for the trees.
- Gotchas:
- If the page is a redirect then either create another identical redirect with the new name or just skip it.
- Avoid creating double redirects.
- Don't over-write a target page.
- Rich Farmbrough, 09:30, 7 September 2009 (UTC).
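Putting those gotchas together, a hedged Python sketch (assuming the pywikibot framework; the skip rules mirror the list above, and all conflicts go to a human):

import pywikibot  # assumed framework

site = pywikibot.Site("en", "wikipedia")

def maybe_move(title):
    if " - " not in title:
        return
    page = pywikibot.Page(site, title)
    if page.isRedirectPage():
        return  # skip redirects, which also avoids creating double redirects
    target = title.replace(" - ", " \u2013 ")  # spaced hyphen -> spaced en dash
    if pywikibot.Page(site, target).exists():
        print("conflict, needs a human:", title)  # never overwrite a target
        return
    page.move(target, reason="Spaced hyphen to spaced en dash in title")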
This is not a very difficult task in terms of actually getting the list, the difficulty is in choosing whether or not to move the article. Either way, Coding... (X! · talk) · @988 · 22:43, 7 September 2009 (UTC)
- Here's what I've found when scanning through titles...
- What to do with names, like 'Abbas Al-Musawi?
- What to do with movie and TV titles?
- What to do with pages that are already redirects?
- What to do with pages where the target exists?
- What to do with names like .38-55_Winchester?
- (X! · talk) · @991 · 22:47, 7 September 2009 (UTC)
- Additionally, I have generated a list of all the titles with "-" in the title at [6] (warning: 807 kilobytes), and all the titles with " - " in the title at [7] (warning: 27 megabytes). (X! · talk) · @003 · 23:04, 7 September 2009 (UTC)
- Just do a wikisearch (title) in AWB for " - ". Tim1357 (talk) 00:56, 8 September 2009 (UTC)
- How can the " - " dump be bigger than the "-" dump? Rich Farmbrough, 04:31, 8 September 2009 (UTC).
- As I hinted above, I would suggest not going beyond " - ", maybe \d\d\d-\d\d\d. In answer to your points: 1, 3, 4 and 5, skip; 2, treat the same as others. Rich Farmbrough, 04:28, 8 September 2009 (UTC).
I really appreciate you guys taking this on in ways I don't know. Just to confirm (possibly going over some of what Rich says): I only request that the " - "s be dealt with, because these should never exist as article titles (only as redirects). That solves 1 & 5 (both ignored). Movie & TV titles (2) should be dealt with the same as others, and redirects (3) should be skipped. I don't know much about coding, but I guess this check needs to come first. Target pages shouldn't exist for non-redirects, because the pages have nearly all been moved from hyphen to ndash, and most (all?) editors directly creating articles at the spaced ndash know to create a convenience redirect. So 4 shouldn't be a problem, but I guess skip if it happens, because if there are both a non-redirect hyphenated page and an ndashed one, there will most likely be a parallel history problem there too (can these "(4) skipped cases" be categorised if they happen?). Thanks all, Rambo's Revenge (talk) 16:27, 8 September 2009 (UTC)
- I'm worried about having a bot decide whether a page should be moved or not. For instance, (before I moved it) Gell-Mann–Nishijima formula was at Gell-Mann-Nishijima formula. Would the bot have moved it to Gell–Mann–Nishijima formula? What about Aix-la-Chappelle? Would it be moved to Aix–la–Chappelle (forget that it's currently redirecting to Aachen)? I got User:Legoktm/User:Legobot to make massive moves of physics articles several times (that is, move the page and replace the hyphens with dashes in the main text), but I manually compiled the lists (User:Headbomb/Move). I wouldn't have a problem with the bot-compilation of likely candidates for moves, but I think meat with eyes should make the call of whether or not to actually move them. Headbomb {ταλκκοντριβς – WP Physics} 02:12, 9 September 2009 (UTC)
- No, because this proposed bot is only dealing with spaced hyphens (i.e. " - ", not just hyphens "-"). There would be lots of problems if automatically doing just hyphens, but this proposal specifically addresses one of the points in WP:HYPHEN: A hyphen is never followed or preceded by a space, ..., and moving the many article titles in violation of this to spaced ndashes (the correct title) instead. Rambo's Revenge (talk) 15:30, 9 September 2009 (UTC)
Translation requests from Wikipedia:Find-A-Grave famous people
Ok, this is probably a bit of a pain to program but it would be a big help. Wikipedia:Find-A-Grave famous people is a subproject of WP:MISSING that tries to identify missing articles about notable dead people. There's been quite a lot of work on these lists to classify the individuals and in many cases articles exist in wikis in other languages. When this is the case, a link has been created to such articles. For instance, at the top of the list Wikipedia:Find-A-Grave famous people/A you find links to the es.wiki articles about Anny Ahlers and Anselmo Aieta. What I would like is to have a bot create translation requests for such entries. I estimate that there are a few hundred of these. I'll check the translation requests manually but I'd be grateful if a bot can generate them. Let me know if you need any xtra info for the task. Thanks, Pichpich (talk) 15:31, 9 September 2009 (UTC)
Journal disambiguation bot
Many citation templates have parameters such as |journal=[[Science]]. These should be replaced by |journal=[[Science (journal)|Science]].
In general, the logic should be:
- Retrieve all wikilinked journal parameters (|journal=[[Foo]]) from citation templates (using dumps would probably be good enough for the first run)
- Check if Foo (journal) exists.
- If Foo (journal) exists and does not redirect to Foo, then change |journal=[[Foo]] → |journal=[[Foo (journal)|Foo]]. Redirects from Foo (journal) to Bar should not be touched.
- Same logic for piped links such as |journal=[[Foo|Foo A]] → |journal=[[Foo (journal)|Foo A]].
- Should also check for |journal=[[Foo]] → |journal=[[Foo (magazine)|Foo]] substitution (same kind of logic applies).
Headbomb {ταλκκοντριβς – WP Physics} 21:06, 25 August 2009 (UTC)
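A Python sketch of that logic (assuming pywikibot; per the note above it leaves any redirect at Foo (journal) untouched, and tolerates whitespace):

import re
import pywikibot

site = pywikibot.Site("en", "wikipedia")
LINK = re.compile(
    r"journal\s*=\s*\[\[\s*([^\[\]|]+?)\s*(?:\|\s*([^\[\]]+?)\s*)?\]\]")

def disambiguate(m):
    target, label = m.group(1), m.group(2) or m.group(1)
    for suffix in (" (journal)", " (magazine)"):
        dab = pywikibot.Page(site, target + suffix)
        if dab.exists() and not dab.isRedirectPage():
            return "journal=[[%s%s|%s]]" % (target, suffix, label)
    return m.group(0)  # leave alone, including all redirect cases

def fix_journal_links(wikitext):
    return LINK.sub(disambiguate, wikitext)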
- Note: Don't forget that there may be whitespace. Headbomb {ταλκκοντριβς – WP Physics} 02:54, 27 August 2009 (UTC)
- These links should probably be removed as they constitute overlinking. They provide little context, are often duplicated, and users can simply use the search box if they are interested in learning more. — Dispenser 15:21, 29 August 2009 (UTC)
- A list of such journals can be found at User:Rich_Farmbrough/temp23 for the time being. Rich Farmbrough, 06:48, 3 September 2009 (UTC).
- Any bot coders up for this? Headbomb {ταλκκοντριβς – WP Physics} 16:00, 8 September 2009 (UTC)
There are 6,000-odd links to Science; approximately 1% are non-dabbed journal= uses. For the un-dabbed pages of the 400 journal titles there are 25,525 links in total; we may expect that about 1% at most are journal= entries, so this can safely be AWB'd, being unlikely to reach 200 edits. Rich Farmbrough, 17:45, 8 September 2009 (UTC).
- In hand. Rich Farmbrough, 19:09, 8 September 2009 (UTC).
- Looking like it will be 30 fixes for Science and less than a score for the rest. Rich Farmbrough, 19:34, 8 September 2009 (UTC).
- Done. It was 29 for Science, and about 8 others. Rich Farmbrough, 07:07, 10 September 2009 (UTC).
- Cool beans. Could you run it once a month or so? Headbomb {ταλκκοντριβς – WP Physics} 08:17, 10 September 2009 (UTC)
Convert "external links" that link to en.wikipedia.org/wiki to Wikilinks
Can't believe there is not a bot that does this already, but I regularly come across links formatted as full URLs, i.e. http://en.wikipedia.org/wiki/Some_Article, that instead should be Some Article. It seems like having a bot make this change would help us with article size issues, would correctly show red vs. blue links, and would just be cleaner (and who doesn't like that?). Possible expansion: also convert URLs for non-English WP pages (and other projects) to the appropriate interwikilink. Has this suggestion come up before? Thoughts, ideas, reactions? UnitedStatesian (talk) 04:37, 9 September 2009 (UTC)
- See WP:WAWI; my general fixes library commonfixes has an implementation for the main namespace and another for the rest, which includes interwikilinking. — Dispenser 13:59, 9 September 2009 (UTC)
- Sometimes they should be URLs. So that can't really be implemented easily. Rich Farmbrough, 14:03, 9 September 2009 (UTC).
- Now I know how difficult this is, so it is indeed not easy to program. But anything that is not in the external links section, or in a reference (and even there, WP:RS ... ??), should simply be an internal link. If that is filtered, and an exclude-list is used for 'pages which should not be done', the error rate could be pretty low. --Dirk Beetstra T C 14:07, 9 September 2009 (UTC)
- True! In-text links are good candidates. Rich Farmbrough, 10:50, 10 September 2009 (UTC).
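For the easy in-text cases, a Python sketch (deliberately naive: it does not implement the section filtering and exclude-list discussed above, which would be needed before any real run):

import re
import urllib.parse

def internalize(m):
    # http://en.wikipedia.org/wiki/Some_Article -> [[Some Article]]
    title = urllib.parse.unquote(m.group(1)).replace("_", " ")
    return "[[%s]]" % title

def fix_self_links(wikitext):
    return re.sub(r"http://en\.wikipedia\.org/wiki/([^\s\[\]<>|{}]+)",
                  internalize, wikitext)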
Convert "external links" that link to en.wikipedia.org/wiki to Wikilinks
Can't believe there is not a bot that does this already, but I regularly come across links formatted as full urls, i.e. http://en.wikipedia.org/wiki/Some_Article that instead should be Some Article. It seems like having a bot make this change would help us with article size issues, would correctly show red vs. blue links, and just be cleaner (and who doesn't like that?). Possible expansion to also convert urls for non-english WP pages (and other projects) to the appropriate interwikilink. Has this suggestion come up before? Thoughts, ideas, reactions? UnitedStatesian (talk) 04:37, 9 September 2009 (UTC)
- See WP:WAWI, my general fixes library commonfixes implement has an implementations for the main namespace and other for the rest which includes interwikilinking. — Dispenser 13:59, 9 September 2009 (UTC)
- Sometimes they should be URLS. So that can't really be implemented easily. Rich Farmbrough, 14:03, 9 September 2009 (UTC).
- Now I know how difficult this is, so it is not easy to programm indeed. But anything that is not in the external links sections, or in a reference (and even there, WP:RS ... ??), should simply be an internal link. If that is filtered, and an exclude-list is used for 'pages which should not be done', the error rate could be pretty low. --Dirk Beetstra T C 14:07, 9 September 2009 (UTC)
- True! In text links are good candidates. Rich Farmbrough, 10:50, 10 September 2009 (UTC).
- Now I know how difficult this is, so it is not easy to programm indeed. But anything that is not in the external links sections, or in a reference (and even there, WP:RS ... ??), should simply be an internal link. If that is filtered, and an exclude-list is used for 'pages which should not be done', the error rate could be pretty low. --Dirk Beetstra T C 14:07, 9 September 2009 (UTC)
Convert "external links" that link to en.wikipedia.org/wiki to Wikilinks
Can't believe there is not a bot that does this already, but I regularly come across links formatted as full urls, i.e. http://en.wikipedia.org/wiki/Some_Article that instead should be Some Article. It seems like having a bot make this change would help us with article size issues, would correctly show red vs. blue links, and just be cleaner (and who doesn't like that?). Possible expansion to also convert urls for non-english WP pages (and other projects) to the appropriate interwikilink. Has this suggestion come up before? Thoughts, ideas, reactions? UnitedStatesian (talk) 04:37, 9 September 2009 (UTC)
- See WP:WAWI, my general fixes library commonfixes implement has an implementations for the main namespace and other for the rest which includes interwikilinking. — Dispenser 13:59, 9 September 2009 (UTC)
- Sometimes they should be URLS. So that can't really be implemented easily. Rich Farmbrough, 14:03, 9 September 2009 (UTC).
- Now I know how difficult this is, so it is not easy to programm indeed. But anything that is not in the external links sections, or in a reference (and even there, WP:RS ... ??), should simply be an internal link. If that is filtered, and an exclude-list is used for 'pages which should not be done', the error rate could be pretty low. --Dirk Beetstra T C 14:07, 9 September 2009 (UTC)
- True! In text links are good candidates. Rich Farmbrough, 10:50, 10 September 2009 (UTC).
- Now I know how difficult this is, so it is not easy to programm indeed. But anything that is not in the external links sections, or in a reference (and even there, WP:RS ... ??), should simply be an internal link. If that is filtered, and an exclude-list is used for 'pages which should not be done', the error rate could be pretty low. --Dirk Beetstra T C 14:07, 9 September 2009 (UTC)
The use of the {{t}} template, outside of Meta, is no different from that of {{tl}}. Even on Meta, it's {{t0}} instead. Given the number of pages which use {{t}} as {{tl}}, it'd be good to have a bot change over instances where the former is used to the latter.
--coldacid (talk|contrib) 18:45, 10 September 2009 (UTC)
- But {{tl}} is the correct template to use. {{t}} is just a redirect to {{tl}}, and doesn't have any advantages over it. So what would be the point? - Kingpin13 (talk) 18:47, 10 September 2009 (UTC)
- Ah, my bad, I see I got the templates the wrong way round, but it still works the other way round. Both the templates are exactly the same. - Kingpin13 (talk) 18:49, 10 September 2009 (UTC)
- What benefit does this present? See WP:R2D. –xenotalk 18:48, 10 September 2009 (UTC)
- Because of {{t}}'s sordid history, I feel it would be better to replace the use of it with the proper template, and then delete {{t}} to prevent further confusion. --coldacid (talk|contrib) 18:53, 10 September 2009 (UTC)
- I don't quite follow, but I think the best thing to do would be to bring this to WP:RFD and present your case. There are fewer than 100 transclusions, so this won't be a problem to do, but I'm still not convinced it's necessary. –xenotalk 18:57, 10 September 2009 (UTC)
- I agree, I also notice that this {{t}} template was only re-redirected by you earlier today. So I think, as Xeno suggested, an RfD is the most appropriate here. Rambo's Revenge (talk) 19:30, 10 September 2009 (UTC)
Tagging Wikiproject Digimon Articles
Hi All,
Please tag the talk pages of all pages in Category:Redirects from Digimon with {{WikiProject DIGI|class=redirect}} (note that most are already tagged with {{WikiProject DIGI}}). This is required as most of the talk pages in Category:WikiProject Digimon articles are for redirects, making it impossible to distinguish redirects from actual articles. This would also allow future categorisation by quality and by importance, as these redirects would not require assessment.
Regards,
G.A.Stalk 08:39, 3 September 2009 (UTC)
- I'm going to try to submit a BRFA for my newly-created bot to try to do this with AWB. MuZemike 19:40, 3 September 2009 (UTC)
- Thank you. G.A.Stalk 19:55, 3 September 2009 (UTC)
- Here are the details of my plan (merged here from the BRFA request so I can revise that request and make it more general):
- I have already gone through Category:Redirects from Digimon and separated the list into three groups: non-existent talk pages, existent talk pages with the {{WikiProject DIGI}} template already included, and existent talk pages without the {{WikiProject DIGI}} template. The goal of this function is to ensure all talk pages in this category have a {{WikiProject DIGI|class=redirect}} on them. Hence, this function will be divided into three separate tasks, all done with AWB:
- (394 talk pages) All non-existent pages will be created with {{WikiProject DIGI|class=redirect}} prepended on them.
- (67 talk pages) All existent pages with the template already included will have class=redirect added using the "find and replace" method.
- (651 talk pages) All existent pages without the template will have {{WikiProject DIGI|class=redirect}} prepended on them.
- This function, including all three tasks, will be run one time only. I also note that there are no subcategories under Category:Redirects from Digimon, and I also note that the one template that redirects to the WikiProject DIGI template, {{WikiProject Digimon}}, has no transclusions.
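The per-page edit for all three tasks reduces to a few lines; a Python sketch (it assumes a banner never already carries a class parameter unless it was assessed deliberately):

def tag_redirect_talk(talk_text):
    if "{{WikiProject DIGI" not in talk_text:
        # tasks 1 and 3: no banner yet, so prepend the full template
        return "{{WikiProject DIGI|class=redirect}}\n" + talk_text
    if "class=" in talk_text:
        return talk_text  # already assessed; leave for a human
    # task 2: slot class=redirect into the existing banner
    return talk_text.replace("{{WikiProject DIGI",
                             "{{WikiProject DIGI|class=redirect", 1)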
- I believe that it should work. Also note that two articles are already tagged {{WikiProject DIGI|class=redirect}}, see Category:WikiProject Digimon redirects. Thanks again. :) G.A.Stalk 21:03, 3 September 2009 (UTC)
- Doing... with GrooveBot. AWB's being a loser, however, so I'm only doing talk pages that have the banner on them ATM. GrooveDog (oh hai.) 21:16, 8 September 2009 (UTC)
- Approved on doing the nonexistent pages and those without templates, but I do have an Internet outage at the moment, so my bot is currently down until my Internet connection is restored. MuZemike 14:13, 10 September 2009 (UTC)
- Thank you: The help so far is really appreciated :-) — G.A.Stalk 14:17, 10 September 2009 (UTC)
- Note: 1079 pages tagged currently, I'll see if I can filter out which ones are lacking the template. GrooveDog (oh hai.) 23:48, 10 September 2009 (UTC)
Link rot (back from the archive)
OK, so I hate it when people start discussions that already exist, so please tell me if there is anything like this that I can research.
I want someone to help me build a bot that fixes dead links. The bot would work like this: it would take a dead link from Wikipedia:Dead external links and look for it in the Internet Archive [archive.org]. Then, it would change the link to the newly found web page. If you have not used archive.org, it works like this: take the URL of any dead link, then simply paste http://web.archive.org/web/ in front of it and press enter. The archive searches for the most recent backup of the page and then produces it. All one must do is replace the dead link in the article with the new web archive URL. Please try this yourself: try any of the links at Wikipedia:Dead external links/404/a and it will work. I have NO programming experience, so please help me with this. Tim1357 (talk) 03:07, 29 August 2009 (UTC)
- Let me clarify: the internet is big, and there are holes in the archive's backup, so the theoretical bot would have to know to skip a link when the archive displays "not in archive". Tim1357 (talk) 03:12, 29 August 2009 (UTC)
- I don't think that this is a good idea; dead links are a huge problem, but I don't think that a simple bot-fix is a good solution. Human intervention is required, to see if a better link can be made. Just IMHO, obv. Chzz ► 04:00, 29 August 2009 (UTC)
- I have considered doing something like this from time to time, but it is considerably more complicated than you might think. There are issues making sure the dead link isn't the result of vandalism, figuring out what the person who added the link was actually looking at, determining if there is a usable live version at a different URL, and determining which version to pick from internet archive if that is the best solution. All possible for a bot to do, IMO, but certainly not easy to program right. --ThaddeusB (talk) 05:17, 29 August 2009 (UTC)
- I'm sure it is difficult, since you would know. But, cant it look at the retrieved on date, and if there's an archive from the same date, use it? While not perfect, it might even use it if it's within a month or something. - Peregrine Fisher (talk) (contribs) 05:34, 29 August 2009 (UTC)
- Yes, if there is one, otherwise it would have to try & find the revision it was added using the history. However, the most difficult problem is determining if it can be replaced by a live link or not. A dead link is due to the page moving as often as it is due to it disappearing entirely.
- It just occurred to me that I also haven't checked archive.org's robot policy/terms of service. One would need to ensure they allow automated retrieval of content before even considering the project. --ThaddeusB (talk) 05:43, 29 August 2009 (UTC)
- ThaddeusB, I will look at their policy and post the question on their site to see if automated retrieval of content is allowed. I'm not sure what the big deal is over whether or not the page has moved. If it has, then it has, but the web-archive version will be the same exact thing, won't it? It actually helps, because it keeps the source from changing. Tim1357 (talk) 11:09, 29 August 2009 (UTC)
- And, the dead links that are vandalism won't produce results from the Internet Archive, and therefore will be skipped over for humans to deal with. Tim1357 (talk) 14:39, 29 August 2009 (UTC)
- I brought this back from the archive because I'm not ready to let it die. I have an idea for the bot's operation, but I do not know Perl or PHP or any other language, so I'm going to need some help. This is what I have thought up for the bot to do; I am pretty sure it is safe, but let me know. Tim1357 (talk) 01:31, 8 September 2009 (UTC)
"Go" Go to next page in list [What Transcludes this page:Template Dead Link] look for "{{Cite web [:any characters:] |url= [anycharacters one] |accessdate=[:any characters two:] {{Dead Link}}</ref>" ---we call this "refeference" if not found, "Go"--- it skips and starts over if found: search [:anycharacters Two:] for 4 numbers, starting with eitther 20, or 199 -- as the internet existed only in the 90's and 2000's Copy those 4 numbers --we'll call them year lookup web.archive.org/"year"/url.--we call this "archive" If not exist, "Go"----skips and stars over if exist search artice for "refrence"-----remember? replace with: {{Cite web [:any characters:] |url= [anycharacters one] |accessdate=[:any characters two:] |archice url:"refrence"{{Dead Link}}</ref> "Go"
I guess persistence pays off, as I will go ahead and put this on my to-do list. That doesn't mean I'll get to it soon as I have several projects I'm working on, but it does mean I will do it. :) --ThaddeusB (talk) 01:42, 8 September 2009 (UTC)
- I'm sorry to be a nag. If I knew even a scrap of computer language I'd do it right now! I was trying to do it with AppleScript, the only thing I even remotely know, but it is easier said than done! Tell me if I can help at all (although I doubt I will be able to). Tim1357 (talk) 01:48, 8 September 2009 (UTC)
Just a suggestion, as I'm all for automation, but I have seen sites change purpose completely as domain names are bought and sold, among other things, and the archive is very fallible.
- Go to the WP page, identify the URL of the link. Check parameters for skip options (see below).
- Go to the history: has the link been changed? If so, log(*) for human inspection and quit.
- Store date link was added.
- Interrogate the archive. If there is no hit, add the parameter "web archive=no" to the dead link template.
- Find a version from before the link was added, if possible. If not, record the subsequent link on the log(*) page for a human, and add "|archive link added = <date>| web archive = <date>"
- Add the link as above. Set the parameters "|archive link added = <date>| web archive = <date>"
(*) Log pages
- Log the WP page name, the URL pointed to, the diff where the URL was added, and a link to the web archive, labelled suitably "No hits" or "Later hit"
- If possible, add a link to a Google cache of the page. Even quote a dozen words, as the cache is ephemeral; there is often another source of a document, and if you have a reasonable quote it can be found more easily.
- By the same token, if the dead link is associated with a quote, log that too.
The idea here is
- Add the archive link when we reasonably can.
- Prevent duplication of effort.
- Help the next agent to resolve cases we can't.
- Leave a clear audit trail in case we get it wrong despite our care.
- Rich Farmbrough, 02:56, 8 September 2009 (UTC).
- I agree that the matter is more complicated than simply pulling the latest archive.org version, which is why I've delayed doing this task in the past. It certainly is important to link to the right archived version, and I will definitely take all these points into consideration when designing the bot. Thanks, ThaddeusB (talk) 01:42, 10 September 2009 (UTC)
- If it matters, I got permission for automated retrieval. Here is the email:
Hi, Tim. Your inquiry got passed to me based on the assumption that when you speak of dead links, you mean to link to content in the Wayback Machine archive of web content.
Please feel free to have your automated checker/link-fixer make whatever requests are necessary. Our blanket robots.txt block is to prevent indiscriminate crawling, and especially the case where archived content could be mistaken for original sites if indexed by search engines.
I do suggest you use an identifiable User-Agent on such requests, with a contact URL or email address, just in case your volume of requests creates a problem, though I doubt that will be the case.
Also, please start slow -- say only one request pending at a time -- unless/until you absolutely need to go faster. Let me know if you have any other questions! - Gordon @ IA Web Archive
Tim1357 (talk) 00:38, 10 September 2009 (UTC)
- That is very useful information as well as good news. Rich Farmbrough, 10:48, 10 September 2009 (UTC).
- Perhaps only fix articles that use the Citeweb template that also include a retrieved-date, that way the bot wont have to go back through the history to find when the link was added.Tim1357 (talk) 23:05, 10 September 2009 (UTC)
subst Infobox MMAstats to Infobox martial artist
{{Infobox MMAstats}} is now converted to call {{Infobox martial artist}}, and can be substituted. There are over 500 transclusions to deal with, please. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 17:40, 10 September 2009 (UTC)
- Done and it was actually over 1000 transclusions. Plastikspork ―Œ(talk) 18:13, 12 September 2009 (UTC)
Uncategorized
Can a bot be used to put the {{Uncategorized}} tag on pages without categories (logically)? It sounds like it would be editing new pages a lot, too.-- OsirisV (talk) 20:24, 9 September 2009 (UTC)
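For anyone picking this up: one way to pull candidates is the Special:UncategorizedPages feed exposed through the standard MediaWiki API. A minimal Python sketch (the tagging step itself is left out):

 import requests
 
 API = "https://en.wikipedia.org/w/api.php"
 
 def uncategorized_pages(limit=50):
     """Pull candidate articles from Special:UncategorizedPages via the API."""
     params = {"action": "query", "list": "querypage",
               "qppage": "Uncategorizedpages", "qplimit": limit,
               "format": "json"}
     data = requests.get(API, params=params).json()
     return [row["title"] for row in data["query"]["querypage"]["results"]]
 
 for title in uncategorized_pages():
     print("would tag:", title)  # a real bot would prepend {{Uncategorized}} here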
- There has been a previous BRfA for this, see Wikipedia:Bots/Requests for approval/Coreva-Bot 2. Unfortunately it expired, as the bot-op appears to be inactive. So this seems to be an okay idea, just need a willing bot-op. - Kingpin13 (talk) 20:29, 9 September 2009 (UTC)
- I don't know if the bot still does it but Erik9bot was not so long ago approved for this job. See Wikipedia:Bots/Requests for approval/Erik9bot 8. Garion96 (talk) 21:06, 9 September 2009 (UTC)
- I believe there is a bot currently doing this - probably Erik9bot, but I'm not 100% sure offhand. I have definitely seen a bot tag such pages in recent memory though. --ThaddeusB (talk) 21:20, 9 September 2009 (UTC)
- This is easily doable. I can put in a BRFA for it. Having said that, a lot of AWB instances will add an uncat tag. Rich Farmbrough, 10:56, 10 September 2009 (UTC).
- I think ThaddeusB is correct, it appears Erik9bot currently maintains Category:Articles lacking sources (Erik9bot), which holds 102,636 articles at the moment. Although the bot doesn't use the {{Uncategorized}} template. - Kingpin13 (talk) 11:00, 10 September 2009 (UTC)
- The bot is still doing it with the {{Uncategorized}} template. See here. Garion96 (talk) 16:26, 14 September 2009 (UTC)
Unify astronomical coordinates
The bot should browse through all pages that contain templates with astronomical coordinates (e.g. Template:Starbox_observe, Template:Infobox galaxy):
- 1. If the coordinates are not in the format {{RA|nn|nn|nn}}, {{DEC|nn|nn|nn}}, parse the "ra=" and "dec=" values and fit the numbers into the RA and DEC templates.
- 2. If the page doesn't contain the {{Sky}} template, add it right after the infobox using the parsed ra and dec numbers.
- 3. Check whether the coordinates and other object details have a proper reference (e.g. "<ref name="ned" />", example - NGC 2442); if not, add it, but the data need to be verified first - if they don't match, add a "reference required" tag.
- 4. Check existing {{Sky}} entries for the correct minus sign "-". I just found and fixed an issue with it - see [8].
While parsing the "dec" parameter, please be aware of the sign before the first number, as it can take many forms, e.g. "-", "—", "−", "&minus;", " ", "&plus;". If the parser is confused, mark the page for manual review. There are thousands of pages that need to be browsed and fixed. Examples of pages I fixed manually - [9], [10]. Thanks. friendlystar (talk) 23:01, 13 September 2009 (UTC)
"X" and "X missing" on the same page
Something I just thought of; I don't really have the time to develop it myself, but I think it's a worthy goal. We have a few pairs of templates where one is supposed to be replaced by the other: the most prominent example is {{coord}} and {{coord missing}}: if coordinates are needed on a page, the template is added, and then when the coordinates are found they overwrite the needed template. But I bet there are some pages which have both templates, and there I expect the coord-needed template can be safely removed, certainly semi-auto. I had a SQL query running on the toolserver to see if I could count the situations where this is the case, but it died :( Thoughts? Happy‑melon 23:02, 10 September 2009 (UTC)
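One bot-side alternative to the toolserver query, sketched in Python against the standard API (the continuation handling assumes the current API response format, and {{coord}} has a huge transclusion count, so a real run would need the cap raised considerably):

 import requests
 
 API = "https://en.wikipedia.org/w/api.php"
 
 def transcluders(template, cap=100000):
     """Collect mainspace titles that transclude the given template."""
     titles, cont = set(), {}
     while True:
         params = {"action": "query", "list": "embeddedin",
                   "eititle": template, "einamespace": 0,
                   "eilimit": "max", "format": "json", **cont}
         data = requests.get(API, params=params).json()
         titles.update(p["title"] for p in data["query"]["embeddedin"])
         cont = data.get("continue")
         if not cont or len(titles) >= cap:
             return titles
 
 # Pages carrying both templates are candidates for removing {{coord missing}}.
 both = transcluders("Template:Coord missing") & transcluders("Template:Coord")
 print(len(both), "pages have both templates")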
- Have you asked User:The Anome about this? Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 21:50, 16 September 2009 (UTC)
Potential photo exists bot
Would it be possible to write a bot that looked for articles without photos and first checked if they had a link to another language wiki that used a Commons image; and if that wasn't the case, searched by article name in Commons? Then it would produce lists of articles with possible photos and reasons why they might match, much as per Wikipedia:WikiProject Red Link Recovery. We could then set up a project to go through and add the photos or mark the suggestion as spurious, so the bot would also need to have a facility to suppress suggestions previously marked as wrong. ϢereSpielChequers 22:24, 16 September 2009 (UTC)
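The second lookup (searching Commons by article name) is easy to prototype; a sketch, with the match-scoring and the suppression list left out:

 import requests
 
 COMMONS_API = "https://commons.wikimedia.org/w/api.php"
 
 def commons_candidates(article_title, limit=5):
     """Search the Commons File: namespace for names matching an article title."""
     params = {"action": "query", "list": "search",
               "srsearch": article_title, "srnamespace": 6,  # File: namespace
               "srlimit": limit, "format": "json"}
     data = requests.get(COMMONS_API, params=params).json()
     return [hit["title"] for hit in data["query"]["search"]]
 
 # Each hit would go on the suggestion list with the reason "title match".
 print(commons_candidates("Sub-Roman Britain"))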
- Would this be what you're looking for? βcommand 22:25, 16 September 2009 (UTC)
Category:WikiProject Ancient Near East articles
This WikiProject category is full of articles and subcategories, instead of article talk pages and category talk pages. Can someone please organise a bot to transfer the category tags across to the talk pages? Hesperian 00:00, 15 September 2009 (UTC)
- No bot needed, this should take care of it. (Will take a while for the job queue to flush 'em out though). –xenotalk 22:00, 16 September 2009 (UTC)
- Thanks. You've completely depopulated the above category (once the queue flushes). This is not a big deal, given the evidence that there was no effort involved in populating it in the first place. But I guess that project would probably still like the talk pages tagged into it. Having said that, I'm not involved in that project, and my goal in coming here is now achieved, so I'm happy to call this resolved. Hesperian 00:00, 17 September 2009 (UTC)
- The project is already categorizing its articles by quality and importance (the talk pages have the banner) - if they want also a single umbrella cat, it's a simple change to their banner (see below). But you were right, the WikiProject category was inappropriate for mainspace pages - hidden or not. –xenotalk 00:04, 17 September 2009 (UTC)
|MAIN_CAT = WikiProject Ancient Near East articles
- Of course, they could submit a request at User talk:Xenobot Mk V/requests if there is tagging work that still needs to be done. –xenotalk 00:11, 17 September 2009 (UTC)
- Oh yes, I see. The talk pages have been dispersed to assessment categories, whereas the articles were dropped into the root category by the portal template. That makes sense. Thanks. Hesperian 00:24, 17 September 2009 (UTC)
Link rot (back from the archive)
Ok, so I hate it when people start discussions that already exist, so please tell me if there is anything like this that I can research.
I want someone to help me build a bot that fixes dead links. The bot would work like this: it would find the said dead link from Wikipedia:Dead external links and look for it in the Internet Archive [archive.org]. Then, it would change the link to the newly found web page. If you have not used archive.org, it works like this: take the URL of any dead link, then simply paste http://web.archive.org/web/ in front of it and press enter. The archive searches for the most recent backup of the page and then produces it. All one must do is replace the dead link in the article with the new web archive URL. Please try this yourself with any of the links at Wikipedia:Dead external links/404/a and it will work. I have NO programming experience, so please help me with this. Tim1357 (talk) 03:07, 29 August 2009 (UTC)
- Let me clarify: the internet is big, and there are holes in the archive's backup, so the theoretical bot would have to know to skip a link when the archive displays "not in archive". Tim1357 (talk) 03:12, 29 August 2009 (UTC)
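That lookup is mechanical enough to sketch in Python (treating a non-200 response as the "not in archive" case is an assumption about how the Wayback Machine answers):

 import requests
 
 def archived_copy(dead_url):
     """Return the Wayback Machine URL for a dead link, or None if no backup exists."""
     probe = "http://web.archive.org/web/" + dead_url
     resp = requests.head(probe, allow_redirects=True)
     # The archive redirects to its most recent capture; a miss comes back 404.
     return resp.url if resp.status_code == 200 else None
 
 print(archived_copy("http://example.com/some-dead-page"))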
- I don't think that this is a good idea; dead links are a huge problem, but I don't think that a simple bot-fix is a good solution. Human intervention is required, to see if a better link can be made. Just IMHO, obv. Chzz ► 04:00, 29 August 2009 (UTC)
- I have considered doing something like this from time to time, but it is considerably more complicated than you might think. There are issues making sure the dead link isn't the result of vandalism, figuring out what the person who added the link was actually looking at, determining if there is a usable live version at a different URL, and determining which version to pick from internet archive if that is the best solution. All possible for a bot to do, IMO, but certainly not easy to program right. --ThaddeusB (talk) 05:17, 29 August 2009 (UTC)
- I'm sure it is difficult, since you would know. But can't it look at the retrieved-on date, and if there's an archive from the same date, use it? While not perfect, it might even use it if it's within a month or something. -Peregrine Fisher (talk) (contribs) 05:34, 29 August 2009 (UTC)
- Yes, if there is one; otherwise it would have to try and find the revision it was added in, using the history. However, the most difficult problem is determining if it can be replaced by a live link or not. A dead link is due to the page moving as often as it is due to it disappearing entirely.
- It just occurred to me that I also haven't checked archive.org's robot policy/terms of service. One would need to ensure they allow automated retrieval of content before even considering the project.--ThaddeusB (talk) 05:43, 29 August 2009 (UTC)
- ThaddeusB, I will look at their policy/post the question on their site to see if automated retrieval of content is allowed. I'm not sure what the big deal is over whether or not the page has moved. If it has, then it has, but the web-archive version will be the same exact thing, won't it? It actually helps because it keeps the source from changing. Tim1357 (talk) 11:09, 29 August 2009 (UTC)
- And the dead links that are vandalism won't produce results from the Internet Archive, and therefore will be skipped over for humans to deal with. Tim1357 (talk) 14:39, 29 August 2009 (UTC)
- I brought this back from the archive because I'm not ready to let it die. I have an idea for the bot's operation, but I do not know Perl or PHP or any other language, so I'm going to need some help. This is what I have thought up for the bot to do. I am pretty sure it is safe, but let me know. Tim1357 (talk) 01:31, 8 September 2009 (UTC)
"Go" Go to next page in list [What Transcludes this page:Template Dead Link] look for "{{Cite web [:any characters:] |url= [anycharacters one] |accessdate=[:any characters two:] {{Dead Link}}</ref>" ---we call this "refeference" if not found, "Go"--- it skips and starts over if found: search [:anycharacters Two:] for 4 numbers, starting with eitther 20, or 199 -- as the internet existed only in the 90's and 2000's Copy those 4 numbers --we'll call them year lookup web.archive.org/"year"/url.--we call this "archive" If not exist, "Go"----skips and stars over if exist search artice for "refrence"-----remember? replace with: {{Cite web [:any characters:] |url= [anycharacters one] |accessdate=[:any characters two:] |archice url:"refrence"{{Dead Link}}</ref> "Go"
I guess persistence pays off, as I will go ahead and put this on my to-do list. That doesn't mean I'll get to it soon as I have several projects I'm working on, but it does mean I will do it. :) --ThaddeusB (talk) 01:42, 8 September 2009 (UTC)
- I'm sorry to be a nag. If I knew even a scrap of a computer language I'd do it right now! I was trying to do it with AppleScript, the only thing I even remotely know, but it is easier said than done! Tell me if I can help at all (although I doubt I will be able to). Tim1357 (talk) 01:48, 8 September 2009 (UTC)
Just a suggestion, as I'm all for automation, but I have seen sites change purpose completely as domain names are bought and sold, among other things, and the archive is very fallible.
- Go to WP page, identify url of link. Check parameters for skip options(see below).
- Go to history - has link been changed? If so log(*) for human inspection and quit
- Store date link was added.
- Interrogate archive. If no hit add parameter "web archive=no" to dead link template
- Find a version before link was added if possible. If not record subsequent link on log(*) page for human, add "|archive link added = <date>| web archive = <date>"
- Add link as above. Set the parameters "|archive link added = <date>| web archive = <date>"
(*) Log pages
- Log the WP-page name, URL pointed to, diff where the URL was added, link to webarchive labelled suitably "No hits" or "Later hit"
- If possible add a link to a Google cache of the page. Even quote a dozen words as the cache is ephemeral - there is often another source of a document, if you have a reasonable quote it can be found more easily.
- By the same token, if the dead link is associated with a quote, log that too.
The idea here is:
- Add the archive link when we reasonably can.
- Prevent duplication of effort.
- Help the next agent to resolve cases we can't.
- Leave a clear audit trail in case we get it wrong despite our care.
- Rich Farmbrough, 02:56, 8 September 2009 (UTC).
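A runnable sketch of the "find a version from before the link was added" step in that outline, leaning on the fact that the Wayback Machine redirects /web/<timestamp>/<url> to the capture nearest that timestamp (the 404-on-miss behaviour is an assumption):

 import requests
 
 def snapshot_before(url, yyyymmdd):
     """Return (archive_url, "before"|"later"), or (None, None) if no hits."""
     probe = "http://web.archive.org/web/%s/%s" % (yyyymmdd, url)
     resp = requests.head(probe, allow_redirects=True)
     if resp.status_code != 200:
         return None, None
     # The redirect target embeds the timestamp of the capture actually served.
     stamp = resp.url.split("/web/")[1].split("/")[0][:8]
     return resp.url, ("before" if stamp <= yyyymmdd else "later")
 
 archive, when = snapshot_before("http://example.com/old-page", "20080315")
 if archive is None:
     print("log: no hits")                # -> add |web archive=no
 elif when == "later":
     print("log: later hit:", archive)    # -> record for human review
 else:
     print("use:", archive)               # -> add the archive link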
- I agree that the matter is more complicated than simply pulling to latest archive.org version, which is why I've delayed doing this task in the past. It certainly is important to link to the right archived version and I will definitely take all these points into consideration when designing the bot. Thanks, ThaddeusB (talk) 01:42, 10 September 2009 (UTC)
- If it matters, I got permission for automated retrieval. Here is the email:
Hi, Tim. Your inquiry got passed to me based on the assumption that when you speak of dead links, you mean to link to content in the Wayback Machine archive of web content.
Please feel free to have your automated checker/link-fixer make whatever requests are necessary. Our blanket robots.txt block is to prevent indiscriminate crawling, and especially the case where archived content could be mistaken for original sites if indexed by search engines.
I do suggest you use an identifiable User-Agent on such requests, with a contact URL or email address, just in case your volume of requests creates a problem, though I doubt that will be the case.
Also, please start slow -- say only one request pending at a time -- unless/until you absolutely need to go faster. Let me know if you have any other questions! - Gordon @ IA Web Archive
Tim1357 (talk) 00:38, 10 September 2009 (UTC)
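Gordon's two requests (an identifiable User-Agent and only one request pending at a time) are simple to honour; a minimal sketch, with placeholder contact details:

 import time
 import requests
 
 session = requests.Session()
 session.headers["User-Agent"] = (
     "DeadLinkFixBot/0.1 (https://en.wikipedia.org/wiki/User:ExampleBot; "
     "example@example.org)"  # placeholder contact details, per Gordon's advice
 )
 
 def fetch(url, delay=1.0):
     """Strictly serial: each call blocks until the previous request has finished."""
     resp = session.get(url)
     time.sleep(delay)  # "start slow" -- one request at a time, with a pause
     return resp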
- That is very useful information as well as good news. Rich Farmbrough, 10:48, 10 September 2009 (UTC).
- Perhaps only fix articles that use the Cite web template and also include a retrieved date; that way the bot won't have to go back through the history to find when the link was added. Tim1357 (talk) 23:05, 10 September 2009 (UTC)
- Have you seen WikiBlame? It tells you when the link was first added. Might be useful for when there is no retrieved date. Tim1357 (talk) 03:28, 18 September 2009 (UTC)
Adding articles in one category to another
Is it possible to have all the articles in Category:Sub-Roman Britain added to Category:Sub-Roman Britain task force articles? Thanks. Dougweller (talk) 13:57, 18 September 2009 (UTC)
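This is a few lines with the pywikibot framework (the successor of the pywikipedia scripts in use at the time); a sketch, noting that task-force categories usually belong on talk pages via the project banner rather than on the articles themselves:

 import pywikibot
 
 site = pywikibot.Site("en", "wikipedia")
 source = pywikibot.Category(site, "Category:Sub-Roman Britain")
 new_cat = "[[Category:Sub-Roman Britain task force articles]]"
 
 for page in source.articles():
     if new_cat not in page.text:
         page.text = page.text.rstrip() + "\n" + new_cat
         page.save(summary="Adding to Category:Sub-Roman Britain task force articles")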
List of CERN experiments
Hey, I want to build a list of CERN experiments, but it would be long and tedious to do by hand, since there are a lot of them. So take a look at my sandbox (and how it's written) to see what sort of end result I'd want to get.
What the bot would need to do is go through http://www.slac.stanford.edu/spires/find/experiments/www2?ee=CERN-NA-001 through http://www.slac.stanford.edu/spires/find/experiments/www2?ee=CERN-NA-63, and retrieve the relevant information. (It's possible that not all the links will produce a result, since the latest experiments are rather new, and the older ones are not all documented).
- Experiment: That's an easy one. NA1, NA2, .... NA###. Write as [[NA1 experiment|NA001]].
- Codename: Leave blank
- Spokesperson: Follow the link, and retrieve the full name. For the NA1 experiment, the SPIRES description gives "L. Foa". If you follow the link, the name is "Foa, Lorenzo". Hence write "Lorenzo Foa". This will produce errors, but I will correct them manually.
- Title: Convert to lowercase. This will produce errors, but I will correct them manually.
- Proposed/Approved/Began/Completed: Just do a straight import. I'll make the other tweaks.
- Link: Links are systematically at http://www.slac.stanford.edu/spires/find/experiments/www2?ee=CERN-NA-###, where ### is the 3 number identifier. Link as [http://www.slac.stanford.edu/spires/find/experiments/www2?ee=CERN-NA-### SPIRES]
- Website: Import from the URL section right under "Spokesperson". "??" means write &mdash;; otherwise give the link as Website (but follow the link first to see if it's been redirected). If the link is dead, just place (dead) next to it.
Obviously this should be done for all the experiments found in the SPIRES database, namely EMU, IS, NA, PS, R, T, UA, and WA experiments. When all's said and done it needs to run up to EMU20 experiment, IS494 experiment (lots missing), NA63 experiment, PS215 experiment (lots missing), R808 experiment (lots missing), T250 experiment (lots missing), UA9 experiment, and WA103 experiment respectively. There'll be a shiny barnstar for whoever codes this. You can make the bot write straight in my User:Headbomb/Sandbox8/NA experiments and so on if you want. Headbomb {ταλκκοντριβς – WP Physics} 17:46, 14 September 2009 (UTC)
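A sketch of the retrieval loop for one experiment series (the URL pattern is from the request above; the field extraction is stubbed out and the row format only gestures at the sandbox table, since the SPIRES page structure would need inspecting first):

 import requests
 from bs4 import BeautifulSoup
 
 BASE = "http://www.slac.stanford.edu/spires/find/experiments/www2?ee=CERN-NA-%03d"
 
 rows = []
 for n in range(1, 64):                      # NA-001 through NA-063
     resp = requests.get(BASE % n)
     if resp.status_code != 200 or "Spokesperson" not in resp.text:
         continue                            # experiment not (yet) documented
     soup = BeautifulSoup(resp.text, "html.parser")
     # Field extraction stubbed: pull spokesperson, title, and dates from `soup`.
     rows.append("| [[NA%d experiment|NA%03d]] || || ... || [%s SPIRES]"
                 % (n, n, BASE % n))
 
 print("\n".join(rows))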
- Shouldn't be too hard. I'll try and do it within the next week or so, unless someone else beats me to it. Dr pda (talk) 21:47, 17 September 2009 (UTC)
- Done! I've posted the results as one big table on User talk:Headbomb/Sandbox8 as I couldn't be bothered separating out the different groups of experiments (EMU, IS etc). Let me know if there are any issues. Dr pda (talk) 11:28, 18 September 2009 (UTC)
- Wow, that was fast! Thanks. I'll let you know if there are any issues, but it looks fine from the quick glance I gave it. Headbomb {ταλκκοντριβς – WP Physics} 15:39, 18 September 2009 (UTC)