It's been years since the launch of Inkbunny's first Netherlands cache and backup server, phagos - aided by a surplus from our donation drive. As is often the case, requirements change over time, but our use of leases means we don't get stuck for years with hardware that no longer meets our needs.
I replace our content servers fairly regularly behind the scenes, after consulting with other staff, based on available deals. It's been a while since I talked about it here, though, so I thought I'd write about what we've done with this particular service since 2015, and the thinking behind its current iteration:
* In 2018, we moved to a somewhat-cheaper yet more capable system: an HP DL380e Gen8 with two hexa-core Xeon E5-2420s at 1.9-2.4GHz, 32GB DDR3-1333 ECC RAM and a 6TB RAID 5 [4 x 2TB]
* From December 2019 until the end of November, we used an HP DL180 G6 with two quad-core Xeon E5620s at 2.4-2.66GHz, 64GB DDR3-1333 ECC RAM and 500GB RAID 10 plus 14.5TB RAID 5 [6 x 3TB].
The third system started as a very good deal. Honestly, it was still a good deal for the raw storage and HDD spindles, but at the start of this year we got an energy surcharge based on its two units of rack space; combined with annual price inflation at that host, it was no longer the best fit for our needs.
We considered other machines from the same provider, but instead decided to go with a smaller dedicated SYS-LE-2 server from French provider OVH's cut-price SoYouStart brand, with:
* an S1200SPL motherboard
* a quad-core HT Xeon E3-1230 v6 3.5-3.9GHz [Kaby Lake-DT] [1-4 core boost @ 3.9/3.8/3.8/3.7GHz]
* 32GB of DDR4-2666@2400MHz [a CPU limit] ECC RAM (2x16GB, dual-channel), and
* 2x8TB HGST Ultrastar He10 SATA HDDs, with 256MB cache each
** These drives were partitioned in software RAID 0 as 1TB + 14TB ext4 volumes
* Transfer is provided in the form of a guaranteed 250Mbps in/out @ 1Gbps line speed.
250Mbps of bandwidth is, in theory, a limitation, but since we know this server only averages ~25-65Mbps out (with ~125-175Mbps peaks) and there's a limit to how fast we can realistically pull data off two HDDs, it seemed a reasonable trade-off. In fact, when writing to it we were able to exceed this speed, so it's clearly not a hard cap. Most importantly, our provider offers DDoS filtering, and there's no possibility of being charged because people are flooding your server with traffic - an issue with a previous host.
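If you want to sanity-check those figures yourself, here's a minimal Python sketch converting between the megabits of the bandwidth guarantee and the megabytes per second that disks are usually rated in, using the numbers quoted above:

```python
# Unit sanity check: the network figures above are in megabits/s, while
# disk throughput is usually quoted in megabytes/s (8 bits per byte).
def mbps_to_mbytes(mbps: float) -> float:
    return mbps / 8

figures = [("guaranteed", 250), ("typical out", 65),
           ("peak out", 175), ("line speed", 1000)]
for label, mbps in figures:
    print(f"{label:>11}: {mbps:4d} Mb/s = {mbps_to_mbytes(mbps):5.1f} MB/s")
# The 250 Mb/s guarantee is ~31 MB/s sustained - comfortably above the
# ~3-8 MB/s this server actually averages outbound.
```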
There was a more-expensive model with a slightly faster CPU and twice the RAM; as is often the case, the extra capability didn't justify the added cost when our main bottleneck is storage. More RAM would have been nice, but turned out to be inessential. Likewise, while the drives' caches aren't battery-backed, they support FUA (Force Unit Access) - a feature which lets the server guarantee that a specific write has reached the disk without forcing every write in flight to be flushed. This helps when the database replication process wants to be sure it's saved something while the image cache is also writing a temporary copy of a file from the main server.
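To illustrate, here's a minimal Python sketch of the two write patterns described above; the filenames and contents are made up, and the kernel decides for itself whether a sync is satisfied with an FUA write or a full cache flush:

```python
import os

# Database-style write: must survive a power cut, so it blocks until the
# data is on stable storage (which the kernel can satisfy with an FUA
# write on supporting drives, rather than flushing the whole write cache).
def durable_append(path: str, record: bytes) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o600)
    try:
        os.write(fd, record)
        os.fdatasync(fd)  # don't return until this write is durable
    finally:
        os.close(fd)

# Cache-style write: a lost copy can simply be re-fetched from the main
# server, so it's left buffered for the page cache to write back later.
def lazy_cache_write(path: str, blob: bytes) -> None:
    with open(path, "wb") as f:
        f.write(blob)

durable_append("replication.log", b"txn 42 committed\n")
lazy_cache_write("cached_image.jpg", b"...image bytes...")
```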
As this was one of our host's cheaper offers, we didn't get to choose where the machine was deployed beyond "France". It ended up in Roubaix, which was perfect: it's central to our host's network (being their headquarters), and has great links to the rest of France and Spain (including our main server in Gravelines, to the north - but not in the same datacenter, for redundancy) as well as Central and Eastern Europe. Sorry, Italy, it's a bit more roundabout for you. We're guessing many won't know where Roubaix is, and it's closer to Belgium than to most of the rest of France, so we named it the Belgium cache.
Like our main server, phagos uses 64-bit Debian Linux bullseye and encrypted storage for the 1TB database volume, located at the faster 'start' of the drive (which is actually the outer edge, where the speed under the head is highest). This volume also holds thumbnails and other small files. Keeping these here significantly increases the attainable operations per second on a traditional HDD, as there's far less distance for the heads to move on average. Larger files go on the second volume; at 14TB it closely matches our main server (and is over twice what we currently need).
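A rough illustration of why the outer edge is faster, as a Python sketch; the platter radii are assumed ballpark figures for a 3.5" drive, not He10 specifications:

```python
import math

# At constant RPM, the linear velocity under the head scales with radius,
# and (at constant bit density along the track) so does the sustained
# transfer rate. Radii below are rough, assumed figures for a 3.5" platter.
RPM = 7200
for name, radius_mm in [("outer edge", 46.0), ("inner edge", 20.0)]:
    v_m_s = 2 * math.pi * (radius_mm / 1000) * RPM / 60
    print(f"{name}: ~{v_m_s:.0f} m/s under the head")
# The outer tracks move ~2.3x faster here, which is why the small, busy
# database volume lives at the start of the disk.
```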
Both volumes use RAID 0 (striping) - this isn't great for redundancy, and isn't something we usually do, but it helps maintain the necessary performance with just two drives; both drives had less than six months of wear when we got them, so chances are good they'll last while we use them. Ultimately, this is a cache - all files are copies and/or have copies elsewhere. The server also comes with 100GB of NFS storage to back up configs.
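For the curious, striping is simple enough to model in a few lines of Python; the 512KiB chunk size below is an assumption (it's the Linux mdraid default), not a detail from our setup:

```python
# A toy model of RAID 0: the volume is carved into fixed-size chunks laid
# out round-robin across the drives, so big sequential transfers keep both
# spindles busy at once.
CHUNK = 512 * 1024  # assumed chunk size (mdraid default)
DISKS = 2

def locate(volume_offset: int) -> tuple[int, int]:
    """Map a byte offset in the volume to (disk index, offset on disk)."""
    chunk_no, within = divmod(volume_offset, CHUNK)
    return chunk_no % DISKS, (chunk_no // DISKS) * CHUNK + within

# A 2MiB sequential read alternates between the two disks:
for off in range(0, 2 * 1024 * 1024, CHUNK):
    disk, disk_off = locate(off)
    print(f"volume offset {off:>8} -> disk {disk}, offset {disk_off}")
```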
CPU-wise, there are both frequency and architectural gains from seven years of development - it's ~112% faster than the old phagos at a common single-threaded benchmark (OpenSSL's speed test). This is borne out by average CPU usage: down to ~80% of a CPU from ~160%. In practice, we previously spread the computation of one recommendation request over four workers, expecting that we'd often have another request in flight and thus be using all eight cores. This still works: the four workers now map onto four physical cores (eight threads with hyperthreading), and they barely clock down (to 3.6GHz) when all are in use, so an individual recommendation completes in roughly half the time it used to.
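If you'd like to compare your own hardware, the benchmark can be approximated with OpenSSL's built-in speed test; a minimal sketch, assuming sha256 as the primitive (the exact test we ran isn't stated here):

```python
import subprocess

# Single-threaded CPU comparison in the spirit of the above; sha256 is an
# assumed choice. Run on two machines and compare the throughput tables
# printed for 16-byte to 16KB blocks.
result = subprocess.run(
    ["openssl", "speed", "-seconds", "3", "sha256"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```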
With only one, far more efficient CPU (it struggles to use 32W, and usually draws half that), two HDDs rather than six, no hardware RAID card and far fewer fans, the new system uses roughly a third of the power of the one it replaces, which doubtless plays into the lower cost. It's also ~130% faster than the current main server, CPU-wise, and has twice the RAM - both things we plan to address soon, as we've grown enough that it's become an issue. SSD helps cover the main server's lack of RAM, but there's a limit; it's running out of space, and it doesn't help when lots of people want to upload at once. So watch this space!
Oh yeah! I'm always doing stuff in the background, even if it's not technical. It's just often not all that interesting, unless you have a deep and abiding interest in the intricacies of hosting or moderation.
I'll be honest, we have a very small developer pool and all have been busy IRL for one reason or another. It is possible that new file format support will arrive soonish (WebP and/or WebM) thanks to improved support in a forthcoming version of our operating system (so we feel confident enabling it without performance/security impact). A main server upgrade will come first, both to support this and because serving increased traffic from new arrivals is the most pressing issue.
Our network story is pretty boring other than that we have at times had to limit the size of files from certain locations - but that was more commonly due to storage limitations than bandwidth or transfer limits. We also have limited backup bandwidth, but that's improved over the last year so it's far less of a concern (which is good, because some SSL transfer compression options we use are going away).
Figuring out where the limits of the cache node serving areas vs. others is always interesting, though - lots of pings!
And all for ~$10/day across all hardware! 😸
It never ceases to amaze me how data storage and transfer have become so efficient and relatively affordable! If you can have Inkbunny running for like 300 bucks per month, while servicing hundreds of thousands of active accounts and hosting millions of files... Well, I just think it's amazing^^
There was a power-related outage in your particular region today in the early afternoon that impacted the content server, and some restoration was required after that.
If you're seeing issues over a longer period than that, drop me a PM or file a support ticket and maybe we can debug it, as it is served differently and it is possible that there are some ongoing IPv4 vs. IPv6 issues (for example).
Not seen it happen since this post, probably cert issues and geography. Also, when does IPv4 get abandoned? There's like 2x as many humans as IPv4 addresses. And it's been two decades :P
Yeah, the one I had before never got around to it in over a decade. The reality is if you have IPv4 space already, the benefit is small, so ISPs instead see it as a cost and a source of misconfiguration.
Meanwhile my cousin's friend in India can't get IPv4 and join IPv4 servers because they don't support IPv6 so has to use a slow proxy. There's like more humans than IPs, it sucks esp in growing countries, USA has monopolised most IPs. Most westernised furries would never cope without access to their favourite games and sites, etc. So if you wonder why there's so few furries from growing and third world countries even where net is avail, you now know why :3 Someone needs to upgrade the internet ASAP! https://worldpopulationreview.com/country-rankings/ip-a...
No, probably not. It is a sign of some problem relating to the expected name of the server not matching the security certificate being presented for it. This can occur from misconfiguration but also from interference with the initial communication with the server, either accidental or deliberate.
Yeah, by comparison the main server uses RAID5 for media files and will be moving to RAID1 soon (thanks to HDDs getting larger just slightly faster than our data has grown).
I'm late but is hosting cub content in a country (which happens to be mine so I know about that law) that considers *ANY* kind of underage content illegal (including drawings of anthros, cartoon characters and "1000 year lolis") a good idea? I don't really know the legal implications of it but I can't imagine OVH would be happy to have (technically) illegal content on their servers.
That would be this one, right? As with many such laws, it revolves around the definition of 'mineur' - bearing in mind that this section is titled 'offences against the human person' - and it'd be hard to claim that this applied to non-human animals without that decision setting a precedent affecting other non-human animals in, say, zoos. False imprisonment, anyone?
The only French case I'm aware of reported here (involving an Inkbunny user) involved lolicon. The UK goes further in an attempt to address nekomimi, yet in a decade and a half I have not heard of them going after full 'cub' characters, let alone successfully. This is probably because dedicating police time to cartoon animal characters risks bringing the justice system into disrepute.
OVH likewise has their hands full dealing with people using their servers for spam, DDoS, hacking their neighbours, etc. They are not too concerned about cartoons. We've actually had cub content reported a few times (it's relatively rare, once every few years), and all that happened is that I got what appeared to be an automated notification requesting that I investigate the situation.
After looking at your link (and others) in more detail, it doesn't explicitly state "any kind of content" as I thought it did, which is good. It does, however, say that fictional forms are included in that law, but it's probably up to them to decide whether an "underage-looking animal character" counts; if the 1000-year-old loli vampire goddess counts, I wouldn't be surprised by it. So, I guess cub is a grey area.
Either way, as you said, it's unlikely the police actively look for or care about that kind of thing, and even if it were brought to them, they'd probably not care that much. We have bigger problems than the "they're drawing imaginary things I don't like!" reports some people might file, lol.
It gets even more interesting when you consider the safe-harbour defence provided by the e-Commerce Directive. Essentially, at least in the UK, users can be prosecuted for their actions, but IB can't in its role as a hosting provider unless it takes an active step in editing or promoting the work (probably more than, say, "Popular" does), or fails to take it down having been made aware of it being illegal. This applies to some of the more serious laws, including the one about cartoon depictions of sex involving fictional characters that appear to be children, despite one or more non-child features.
I've always wondered: how do websites like these store SO MUCH data safely, securely and with some measure of redundancy in case of problems? When I upload, is every file saved to the website server 1:1 in terms of file size, or is the file compressed? I can't imagine that, with multiple people uploading every couple of seconds, a normal hard drive would last long in terms of storage space. How does that work?
You can see the level of redundant storage in the Hardware section of our WikiFur article. Inkbunny uses about 6.5TB in total - perhaps less than you might think - and on our new main server we have two 12TB drives which can store a mirrored copy; plus we have backups on other servers, locally and internationally. The content servers don't have to store all the content, just cache some of it.
PNG files are recompressed, and we occasionally de-duplicate the archive, which reduces storage usage by about 5%. But compression doesn't save much; the bigger factor is that the average submission file might be ~1MB, with reduced-size copies coming to another ~1MB, so we can store ~six million of those (we have around three million).
We actually get about one submission every two minutes (closer to one a minute at peak times), so there's adequate time to handle each one. And storage technology has been growing fast enough that we don't have to use multiple or custom servers, though FA may be another matter. In the future we'll probably even be able to store it all on SSD! FA does that already, which is part of why they're spending far more money than our ~$10/day.
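A back-of-envelope version of that arithmetic in Python, using the ~2MB-per-submission figure from above:

```python
# Capacity and ingest, using the rough figures quoted above.
per_submission_mb = 1 + 1            # ~1MB original + ~1MB reduced copies
mirror_capacity_tb = 12              # one side of the mirrored 12TB pair

submissions_fit = mirror_capacity_tb * 1_000_000 / per_submission_mb
print(f"~{submissions_fit / 1e6:.0f} million submissions fit")  # ~6 million

per_day = 24 * 60 / 2                # one submission every ~2 minutes
growth_gb_per_day = per_day * per_submission_mb / 1000
print(f"~{per_day:.0f}/day ≈ {growth_gb_per_day:.1f} GB/day of new data")
```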