User Details
- User Since: Apr 16 2019, 9:00 PM
- Availability: Available
- LDAP User: Wpao
- MediaWiki User: WPao (WMF)
Wed, Nov 13
Ah that makes sense, thanks for the info. We'll go ahead and move the server once the Phabricator task is created. FWIW, all servers being ordered this fiscal year and moving forward will have 10g cards...and the refresh/upgrade to 10g switches in eqiad for rows C and D will probably happen later in Q4.
The new server is already in service. The main reason I brought this up is the process we had to go through to get a 10G card in wikikube-ctrl1001, because we need the extra bandwidth. I think that to do so, we'll need to choose a server in a rack that has free 10G ports and re-cable. I'll file a separate task.
Tue, Nov 12
Hi @akosiaris - thanks for confirming. I think we already ordered the replacement host via T368933 though. You're welcome to continue using wikikube-ctrl1001 for a longer period of time, and dedicate the new server to something else in the meantime if you want?
Wed, Nov 6
Hi @Jhancock.wm and @Papaul - just a heads up, it looks like the test controller kit arrived yesterday:
Just a heads up @Jclark-ctr & @VRiley-WMF - the test controller kit should've arrived yesterday:
Mon, Nov 4
Thu, Oct 31
Met with the Supermicro team today, who believes the RAID kit should be approved either today or tomorrow, and shipped out after that. For reference, here are some details they sent us below:
Wed, Oct 30
Meeting set with the Supermicro team on October 31 at 3pm UTC, to discuss the proposed RAID controller option and address any outstanding questions that we have. @Volans, @elukey, @RobH, @Papaul, and I are all on the invite titled "SMC/Wiki RAID Controller Discussion," but please let Richard from Supermicro know if you need to propose a different meeting time. Thanks, Willy
Thanks so much @jcrespo, I appreciate your flexibility and patience on this.
Tue, Oct 29
Thanks for the context, Jaime. Based on your current needs and the time constraints, it sounds like it'll be better to have you continue working on the host in its current state. While we're escalating everything with Supermicro, it's been a bit difficult getting some solid ETAs in place. There's also the possibility that unexpected issues could pop up, and I don't want to potentially delay things any further.
Hi @jcrespo - thanks for your feedback on this. My apologies that these Config J servers have been causing a lot of headaches. Unfortunately, we still have to figure out how to best resolve the performance issues from the RAID controller. In your opinion, what would work best? For example, would it work better if we set up a Config J server with the upgraded RAID controller first, and then migrated the data after? Let me know your preference, and we'll do our best to accommodate that.
Mon, Oct 28
Re-opening this task, since we have the incorrect RAID controller on the server. @RobH is currently working with Supermicro on getting an upgraded RAID controller onsite to hopefully resolve the performance issues being seen. @RobH - please continue following up with Supermicro with ETAs and statuses, and post them here for visibility. Thanks, Willy
Re-opening this task, as the server has the incorrect RAID controller. We're working with Supermicro to get an upgraded RAID controller sent onsite, to replace and hopefully resolve the performance issues being seen. @RobH - can you provide frequent updates in this task and work closely with Supermicro on getting the part, until we have this issue resolved? Thanks, Willy
Oct 23 2024
Yup, agreed. If the servers can be reallocated for something else that is currently needed, I think it makes more sense to just repurpose them vs keeping them as spares or decommissioning them.
Sep 28 2024
Sure, no problem @akosiaris. I'm having trouble finding the line item though for wikikube-ctrl1001 on the procurement doc. Is it part of the "Refresh of mw[1349-1413]"?
Sep 26 2024
Thanks for providing all the details on this, @ssingh. @RobH - as we chatted about earlier today, we could ask Ascenty to double-check that there are enough perf tiles in the cold aisle, confirm that the blanking panels are in place (and if not, add them), and possibly get a temperature and humidity reading in that area. Thanks, Willy
Sep 25 2024
Thanks @dcaro. @Jclark-ctr is out the rest of this week, but should be able to ship these out when he's back next week.
Sep 23 2024
Hi @ABran-WMF - can you check with the onsite engineers @VRiley-WMF and @Jclark-ctr? Please also keep in mind this server is due to be refreshed in Q2, so a new system will be on its way in another month or so.
@Jclark-ctr & @VRiley-WMF, who can see if there are any parts available from decommissioned servers
Sep 17 2024
Sep 12 2024
@Jclark-ctr and @VRiley-WMF - can you confirm if we're ok with the Data Platform team increasing power on the hosts listed above? Thanks, Willy
It looks like it'll be 3 drives minimum from the latest email today, and @Jclark-ctr - you can find the shipping label from Dawn's email on Sept 10. @dcaro - just let us know whenever the cluster is back up and how many disks you prefer to send out. Thanks, Willy
Sep 3 2024
Thanks @dcaro, sounds good. I'll bug them again about the drive number, if we don't hear back by mid-week.
Aug 29 2024
Hi @dcaro - just following up on this to see if you were ok with shipping these WMCS drives, with data on them, back to Dell for identifying the root cause? From Dell's last email a couple weeks ago, they stated that they have an NDA with Hynix, along with the NDA with Wikimedia, which should cover any security concerns. To ensure we don't lose momentum, during my call with Dell today I asked them to provide the number of drives they need and a shipping label for where to send them. Let us know, though, if you feel comfortable with sending the disks. Thanks, Willy
Aug 12 2024
@VRiley-WMF - fyi, this one looks like it's high priority
Jul 18 2024
Thanks @elukey, that sounds good!
Jul 17 2024
Hi @ABran-WMF - can you work with the onsite engineers on this? cc'ing @VRiley-WMF & @Jclark-ctr
Jul 16 2024
Jul 12 2024
Thanks for testing this out @Papaul. Since it appears that upgrading the WMF environment to PXELINUX version 6.04 may fix this issue, who would be the best person to help us get that upgraded?
Jul 11 2024
Hi @Eevans - I'll let @Jclark-ctr and @VRiley-WMF confirm your first two questions. From some of the feedback I've received though, it seems that the issue on both hosts started occurring after the drives first failed. Since it's a software RAID, it makes me wonder if there might be an issue on that end of things. Would it be possible to test things out in a hardware RAID setup? In the meantime, I'm going to bump up the refresh of aqs1010 to Q1, so you can try using that server as a replacement for either aqs1013 or aqs1014 (your choice) to see how it responds.
@VRiley-WMF & @Jclark-ctr - can you see if we have any spare 10g NICs from decommissioned servers for this?
@Jhancock.wm & @Papaul - can you see if we have any spare 10g NICs from decommissioned servers for this?
Jul 10 2024
Thanks for the input @cmooney. All your suggestions sound good to me, so feel free to swap out ifconfig with ip, bridge, traceroute, and lldpctl. Thanks!
Jul 3 2024
Thanks so much @elukey for putting this proposal together, and for the chat during office hours today. I like the entire idea, and will run it by the rest of the team during our staff meeting next week. For the first bullet around ssh access to all production nodes for a minimal list of read-only sudo commands, I think we can just go ahead and proceed with this part. It'll be really beneficial in helping the DC-Ops engineers troubleshoot/diagnose issues. My only ask here is to see if it's possible to expand the list of read-only commands to include the following: dmesg, dmidecode, smartctl, nvme, edac-util, mdadm, ledctl, free, uptime, df, top, uname, ipmi-sensors, dhcp, ping, ifconfig. And if we're able to implement this part within a couple weeks, that'll be terrific.
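To make the intent of that read-only list concrete, here's a minimal sketch of what a DC-Ops diagnostic pass over such a whitelist could look like. This is not the actual Puppet/sudoers implementation; the specific command arguments, the timeout, and the idea of wrapping everything in one script are illustrative assumptions only.

```python
#!/usr/bin/env python3
"""Illustrative sketch: run a whitelist of read-only diagnostic commands.

The command list mirrors the proposal above; exact arguments and the
single-script wrapper are assumptions, not the real sudoers/Puppet setup.
"""
import subprocess

# Read-only diagnostics a DC-Ops engineer might be allowed to run via sudo.
READ_ONLY_COMMANDS = [
    ["dmesg", "--level=err,warn"],    # recent kernel errors/warnings
    ["dmidecode", "-t", "system"],    # chassis/model/serial info
    ["smartctl", "--scan"],           # enumerate SMART-capable disks
    ["mdadm", "--detail", "--scan"],  # software RAID array summary
    ["free", "-h"],                   # memory usage
    ["df", "-h"],                     # filesystem usage
    ["uname", "-a"],                  # kernel/OS identification
]

def run_diagnostics() -> None:
    for cmd in READ_ONLY_COMMANDS:
        print(f"### sudo {' '.join(cmd)}")
        try:
            result = subprocess.run(
                ["sudo", *cmd], capture_output=True, text=True, timeout=30
            )
            print(result.stdout or result.stderr)
        except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
            print(f"(skipped: {exc})")

if __name__ == "__main__":
    run_diagnostics()
```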
Hi @Eevans - since we've replaced all hardware parts on this host and the error is still showing up, it doesn't seem like a hardware problem. It's also really odd that aqs1014 is failing on the exact same drive slot. Have you looked into possible software or configuration issues with the software RAID that could be contributing to this? Also, were there any upgrades, maintenance windows, or other changes right before the drive first failed?
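As one hedged example of what that software-RAID check could look like (the array discovery and the fields inspected here are assumptions, not the actual aqs1013/aqs1014 configuration), something along these lines would show whether the md arrays themselves are reporting degraded or failed members rather than a physical drive fault:

```python
#!/usr/bin/env python3
"""Sketch: check Linux software RAID (md) health on a host.

Assumes md arrays are listed in /proc/mdstat; array names and the exact
fields inspected are illustrative, not the real aqs101x layout.
"""
import re
import subprocess

def list_md_arrays() -> list[str]:
    """Return md device names (e.g. 'md0') found in /proc/mdstat."""
    with open("/proc/mdstat") as f:
        return re.findall(r"^(md\d+)\s*:", f.read(), flags=re.MULTILINE)

def array_state(device: str) -> str:
    """Return the 'State' line from `mdadm --detail` for one array."""
    out = subprocess.run(
        ["mdadm", "--detail", f"/dev/{device}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.strip().startswith("State :"):
            return line.strip()
    return "State : unknown"

if __name__ == "__main__":
    for md in list_md_arrays():
        # A healthy array reports "clean"/"active"; a "degraded" state or
        # failed members would point at the RAID layer rather than the
        # physical drive slot alone.
        print(md, "->", array_state(md))
```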
Jun 20 2024
During my call with the Dell Account team today, I asked them to push on this a bit more. The Dell Tech Support engineer hasn't been able to replicate the issue on his end, but I asked the Account team what the ramifications would be for Dell if they were to just ship us the 100 replacement disks for all 14x servers (ie: would they not be able to RMA it with the drive manufacturer, etc.). So, they're going to follow up and get back to me next week. Thanks, Willy
Jun 19 2024
Hi @Papaul - can you add the Dell Support ticket that you created in this Phabricator task, and provide any updates/progress on how that's going? Thanks, Willy
Jun 18 2024
Jun 11 2024
Cool, thanks @RobH. Adding @VRiley-WMF and @Jhancock.wm for visibility also, since I think they were working on this
Jun 10 2024
Thanks @Volans, will do on the remaining Netbox errors.
Jun 6 2024
@Papaul & @Jhancock.wm - was this one completed already via a different task?
Valerie is on vacation, so assigning to John
Ok, got it. Thanks for the info @dcaro. And just to confirm, cloudcephosd1001-1020 have the same hardware configuration (only with different drive manufacturers), and don't have any of the same issues as cloudcephosd1021-1034? Let's see what the Dell team comes back with after escalating up, and hopefully we can make some more headway there.
During my sync-up call with Dell today, I asked our account team to see if they could push a bit more to get more hard drives RMA'd. The servers are still under warranty for a few more months, and they're going to escalate it up the chain to see what can be done. In the meantime though, can we look into whether something else might've changed when all these drives started having bad sectors? It looks like we installed this batch of servers back in December 2021, and they were put in production in 2022. So it seems like they were running ok for about a year, until the drive errors started popping up at the end of 2023.
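For tracking when the bad sectors started growing, a periodic SMART check is one option. Below is a rough sketch; the device paths are placeholders, the attribute names are just the usual wear-related ones, and the JSON output assumes smartmontools 7.0+ (`smartctl -j`). It isn't an existing tool on these hosts.

```python
#!/usr/bin/env python3
"""Sketch: pull SMART reallocated/pending sector counts for a set of disks.

Device paths are placeholders; `smartctl -j` needs smartmontools 7.0+.
"""
import json
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]  # placeholder device list

# SMART attributes commonly associated with media wear / bad sectors.
ATTRS_OF_INTEREST = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def smart_attributes(device: str) -> dict[str, int]:
    out = subprocess.run(
        ["smartctl", "-A", "-j", device],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out)
    table = data.get("ata_smart_attributes", {}).get("table", [])
    return {
        row["name"]: row["raw"]["value"]
        for row in table
        if row["name"] in ATTRS_OF_INTEREST
    }

if __name__ == "__main__":
    for disk in DISKS:
        print(disk, smart_attributes(disk))
```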
Jun 5 2024
Hi @dcaro - just following up on this. Can you provide the racking information for us, to start this install?
May 29 2024
Removing the procurement project tag. We have spares from decom'd servers that we can use for this, instead of purchasing the 10g cards. @VRiley-WMF - can you work with @kamila on getting these hosts upgraded and moved to 10g switches?
May 24 2024
Thanks for the heads up @bking. I went ahead and checked Netbox, just to ensure all the servers were dispersed pretty evenly across the different racks...which they are (listed below is each rack and the quantity of servers in it). For reference, the bolded line items are the racks that are currently pulling a bit more power. We could do a before-and-after snapshot using Grafana (https://grafana.wikimedia.org/d/f64mmDzMz/power-usage?orgId=1&from=now-30d&to=now), though I have a feeling we should still be ok with the increased power.
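For reference, a quick way to reproduce that per-rack count is a short pynetbox query along these lines. This is only a sketch: the Netbox URL, the token handling, and the site filter shown here are assumptions, not how the check was actually run.

```python
#!/usr/bin/env python3
"""Sketch: count active devices per rack in Netbox via pynetbox.

URL, token source, and the site filter are illustrative assumptions.
"""
from collections import Counter
import os

import pynetbox

nb = pynetbox.api(
    "https://netbox.example.org",         # placeholder URL
    token=os.environ["NETBOX_TOKEN"],     # placeholder token source
)

# Tally active devices by rack for one site.
per_rack: Counter[str] = Counter()
for device in nb.dcim.devices.filter(site="eqiad", status="active"):
    if device.rack:
        per_rack[device.rack.name] += 1

for rack, count in sorted(per_rack.items()):
    print(f"{rack}: {count}")
```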
Apr 15 2024
Since the only thing remaining in this task is bringing up the Dell switches in racks E8 and F8 (which I believe the Network SRE team is working on), I'm going to go ahead and resolve the main tracking ticket. Thanks, Willy
Apr 3 2024
Sure, no prob @LSobanski. Here's the list of the 24 active devices that still reference RT tasks in Netbox, along with their purchase dates (network equipment usually EOLs every 8yrs):
Apr 2 2024
Thanks for checking @LSobanski. It's definitely rare that we need to refer back to RT. In the last 5 years, the 2-3 cases where we've had to reference RT were typically for tracking down information about core routers that we had purchased back then. In Netbox, we only have 24 active devices left that still reference RT tasks. As long as we're able to access these in some way (ideally quickly and easily) on the rare occasions that it's needed, you should be able to proceed.
Mar 19 2024
Hi @elukey - do you want me to change the Lift Wing expansion requests for 16x servers in FY24-25 to 10g? Thanks, Willy
Mar 13 2024
@VRiley-WMF & @Jclark-ctr for troubleshooting the hardware. (host was installed a few quarters ago)
Mar 5 2024
Sounds good. @Jhancock.wm - I created a new sheet below, with the following fields. I entered in the hostnames and asset tag, but can you fill in the remaining items for old S/N, new S/N, and Phabricator Task?
Mar 4 2024
Thanks for confirming, @Volans. If everyone else is ok with making the correlation on the accounting spreadsheet, my vote is that we go with that route. Thanks, Willy
Mar 1 2024
Thanks @Volans, that makes sense. My preference would be to leave Netbox as is, and use the accounting spreadsheet to make the S/N connection to each other. Would we be adding a different tab on the accounting spreadsheet for that?
Feb 29 2024
If we change the serial number, I think it would create an error for the S/N / asset tag mismatch (related to Riccardo's points earlier). We also reference the original chassis S/N when dealing with vendors for recycling servers (estimates, official documentation, etc.) and purchasing replacement parts, so I'm still a bit hesitant about editing the S/N in Netbox as the solution. Since it doesn't sound like we receive any Netbox alerts when we replace a motherboard, is there something that we could tweak to replicate the same thing? (ie: change the status or something of the donor server) Or worst case, just suppress these alerts somehow until the servers are eventually decommissioned?
Feb 28 2024
Hey @Volans - much appreciated for your feedback and for the suggestions. I was wondering since the physical serial number listed on the chassis doesn't change (it's only from a Puppet perspective that the serial number changes), is there anything on the Puppet side that could be modified to reflect the MB replacement? If there's something easy that could be done in Puppet to prevent the Netbox error from alerting, I kind of feel like it would be a more accurate representation.
@VRiley-WMF and @Jclark-ctr - can one of you pick up this request? We'll be repurposing one of the previously decommissioned cp servers to set up a temp server for Adam to use. Thanks, Willy
Sounds good @bking, thanks!
Hi @bking - thanks for coming up with the list. I have the following refreshes already on the CapEx doc, so you just have to fill in the missing columns for "Hardware Config", "Network Speed" and "Total Equipment Cost" (for custom configs)
Feb 27 2024
Thanks for picking this up @Jhancock.wm. @Marostegui - since this host looks like it's close to being refreshed in T355350, do you want to just wait for the refreshed server to be setup instead of fixing this one? Thanks, Willy
Feb 26 2024
Feb 23 2024
Hi @ssingh - the hardware should still be around, and we should be able to reallocate one of them for testing purposes. Can you open a new Phabricator task for us with all the necessary details (hostname, racking info, network setup, raid/partitioning, OS, and main point of contact)? Also, do you know how long Adam would need it for?
Feb 21 2024
@Jhancock.wm for visibility and in case any onsite support is needed
Feb 8 2024
Jan 10 2024
Thanks @VRiley-WMF. I have T354684 assigned over to you, so you can work with @fgiunchedi on coordinating downtime for the upgrades. Thanks, Willy
Jan 9 2024
Awesome, thanks @Jhancock.wm. Here's the codfw upgrade ticket for you to coordinate with @fgiunchedi on the downtime - T354685. Thanks, Willy
@Papaul / @Jhancock.wm and @Jclark-ctr / @VRiley-WMF - can you see if you have any spare memory onsite for Filippo? I think it's for prometheus100[5,6] and prometheus200[5,6]. (cc @RobH in case we have to order them)
Dec 15 2023
@Jclark-ctr or @VRiley-WMF - can one of you take a look at this one?
Dec 7 2023
Definitely. @Jclark-ctr & @VRiley-WMF - can you check if we have any spare drives from a decommissioned host? If not, we'll purchase one via @RobH. Thanks, Willy
Dec 1 2023
Nov 29 2023
@Jclark-ctr & @VRiley-WMF - can one of you two work on getting the drive RMA'd for this one? Thanks, Willy
Nov 23 2023
Nov 22 2023
Nov 10 2023
Thanks for working on this @bking. I'm mainly looking to see how much future growth you're anticipating (a rough estimate is fine), whether you have any requests for the type of servers we provide (ie: ARM, GPU, etc), or just any feedback for us in general. We're getting pretty full at codfw, so when we purchase additional data center space, we want to ensure we're adding enough capacity for everyone's future needs over the next 3-5yrs. Thanks, Willy
Oct 30 2023
Awesome, thanks for working on this @VRiley-WMF. @nskaggs & @cmooney - since we have some discrepancies with the number of ports being used on these cloudvirts, should we come up with a plan/process to help us free up the second switchport on them? This will help us reclaim some switchports for new installs and server migrations. Thanks, Willy
Oct 25 2023
Oct 17 2023
@Jclark-ctr or @VRiley-WMF - can one of you follow up on Ben's question above on an-tool1010, along with Alex's comment on deploy1102? Thanks, Willy
Oct 3 2023
@Papaul , who's going to dig around a bit and provide some feedback
Aug 30 2023
Aug 11 2023
Aug 2 2023
It's not on the refresh list for this fiscal year; it looks like it'll be due for a refresh in FY24-25. If the firmware upgrade on the iDRAC doesn't work, we can try sourcing the fan if you want. (cc @RobH)