
BeeGFS

From Wikipedia, the free encyclopedia
BeeGFS
Developer(s): ThinkParQ, Fraunhofer ITWM
Stable release: 7.4.5[1] / September 2024
Repository: github.com/ThinkParQ/beegfs
Operating system: Linux
Type: Distributed file system
License: Server: proprietary; client: GPLv2
Website: beegfs.io

BeeGFS (formerly FhGFS) is a parallel file system developed for high-performance computing. It uses a distributed metadata architecture for scalability and flexibility, and is optimized for high data throughput.

BeeGFS was originally developed at the Fraunhofer Center for High Performance Computing in Germany by a team led by Sven Breuner.[2] Breuner later became the CEO of ThinkParQ (2014–2018), the spin-off company that was founded in 2014 to maintain BeeGFS and offer professional services.

While the Community Edition of BeeGFS can be downloaded and used free of charge, the Enterprise Edition must be used under a professional support subscription contract.[3]

History and usage


BeeGFS started in 2005 as an in-house development at Fraunhofer Center for HPC to replace the existing file system on the institute's new compute cluster and to be used in a production environment.

In 2007, the first beta version of the software was announced during ISC07 in Dresden, Germany, and introduced to the public during SC07 in Reno, Nevada. One year later, the first stable major release became available.

In 2014, Fraunhofer started its spin-off, a new company called ThinkParQ,[4] for BeeGFS. In this process, FhGFS was renamed BeeGFS.[5] While ThinkParQ maintains the software and offers professional services, further feature development continues as a cooperation between ThinkParQ and Fraunhofer.

Because BeeGFS can be used free of charge, the number of active installations is unknown. However, by 2014 there were already around 100 customers worldwide using BeeGFS with commercial support from ThinkParQ and Fraunhofer. Among them are academic users such as universities and research facilities[6] as well as commercial companies in fields such as finance and the oil and gas industry.

Notable installations include several TOP500 computers such as the Loewe-CSC[7] cluster at the Goethe University Frankfurt, Germany (No. 22 at time of installation), the Vienna Scientific Cluster[8] at the University of Vienna, Austria (No. 56 at time of installation), and the Abel[9] cluster at the University of Oslo, Norway (No. 96 at time of installation).

Key concepts and features


When developing BeeGFS, Fraunhofer aimed to create software focused on scalability, flexibility and usability.

BeeGFS runs on any Linux machine and consists of several components: services for clients, metadata servers and storage servers, plus a service for the management host and one for graphical administration and monitoring.[10]

To run BeeGFS, at least one instance each of the metadata service and the storage service is required, but BeeGFS allows multiple instances of each service to distribute the load from a large number of clients. The scalability of each component makes the system as a whole scalable.

File contents are distributed over several storage servers using striping, i.e. each file is split into chunks of a given size and these chunks are distributed over the existing storage servers. The size of these chunks can be defined by the file system administrator. In addition, the metadata is distributed over several metadata servers on a directory level, with each server storing a part of the complete file system tree. This approach allows fast access to the data.
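The striping scheme described above can be illustrated with a short sketch. This is not BeeGFS source code; the chunk size, target names and round-robin placement here are simplifying assumptions chosen to show the idea of splitting a file into fixed-size chunks and spreading them over storage targets.

```python
# Illustrative sketch of striping (not the BeeGFS implementation):
# a file is split into fixed-size chunks that are assigned to storage
# targets in round-robin order. Chunk size is administrator-configurable
# in BeeGFS; 512 KiB here is just an example value.
CHUNK_SIZE = 512 * 1024  # hypothetical chunk size in bytes


def stripe(data: bytes, targets: list[str], chunk_size: int = CHUNK_SIZE):
    """Split `data` into chunks and assign each chunk to a target."""
    placement = []
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        target = targets[(offset // chunk_size) % len(targets)]
        placement.append((target, chunk))
    return placement


# A file slightly larger than three chunks, striped over two targets:
placement = stripe(b"x" * (3 * CHUNK_SIZE + 10), ["storage1", "storage2"])
print([(target, len(chunk)) for target, chunk in placement])
```

Because placement follows directly from the offset, any client can compute which server holds which chunk without a central lookup, which is one reason striped access scales with the number of storage servers.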

Clients, as well as metadata or storage servers, can be added to an existing system without downtime. The client itself is a lightweight kernel module that does not require any kernel patches. The servers run on top of an existing local file system; any underlying file system can be used as long as it is POSIX-compliant, though ext4 is recommended for the metadata servers and XFS for the storage servers. Both server services run in user space.

Also, there is no strict requirement for dedicated hardware for individual services. The design allows a file system administrator to start the services in any combination on a given set of machines and expand in the future. A common way among BeeGFS users to take advantage of this is by combining metadata servers and storage servers on the same machines.

BeeGFS supports various network interconnects with dynamic failover, such as Ethernet or InfiniBand, as well as many different Linux distributions and kernels (from 2.6.16 up to the latest vanilla kernel). Setup and startup use simple init scripts. For users who prefer a graphical interface over the command line, a Java-based GUI (AdMon) is available. Besides managing and administrating the BeeGFS installation, it provides monitoring of the BeeGFS state and offers several options to help identify performance issues within the system.

BeeOND (BeeGFS on-demand)


BeeOND (BeeGFS on-demand) allows the creation of BeeGFS file system instances on a set of nodes with one single command line. Possible use cases for the tool are manifold; a few include setting up a dedicated parallel file system for a cluster job (often referred to as burst-buffering), cloud computing or fast and easy temporary setups for testing purposes.
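As a hedged illustration of the "single command line" workflow, a per-job BeeOND instance might be started and stopped roughly as follows. The flags reflect the `beeond` tool as documented in recent BeeGFS releases but may differ between versions, and the nodefile and paths are placeholders.

```
# Start an on-demand BeeGFS instance on the nodes listed in the nodefile
# (e.g. the nodes allocated to a cluster job), using local storage on each
# node and mounting the resulting file system at /mnt/beeond:
#   -n  file listing participating nodes (hypothetical path)
#   -d  local directory used for storage/metadata on each node
#   -c  mountpoint for the BeeGFS client on each node
beeond start -n /tmp/job_nodefile -d /local/beeond -c /mnt/beeond

# ... run the job against /mnt/beeond, stage results out ...

# Tear the instance down again when the job finishes:
beeond stop -n /tmp/job_nodefile -L -d
```

Used this way, the node-local SSDs of a job's compute nodes act as a temporary burst buffer, keeping scratch I/O off the cluster's global file system.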

BeeGFS and containers


An open-source container storage interface (CSI) driver enables BeeGFS to be used with container orchestrators like Kubernetes.[11] The driver is designed to support environments where containers running in Kubernetes and jobs running in traditional HPC workload managers need to share access to the same BeeGFS file system. The driver enables two main workflows:

  • Static provisioning allows administrators to grant containers access to existing directories in BeeGFS.
  • Dynamic provisioning allows containers to request BeeGFS storage on-demand (represented as a new directory).

Container access and visibility into the file system are restricted to the intended directory. Dynamic provisioning takes BeeGFS features such as storage pools and striping into account when creating the corresponding directory in BeeGFS. General features of a POSIX file system, such as the ability to specify permissions on new directories, are also exposed, easing the integration of global shared storage with containers and simplifying the tracking and limiting of container consumption of the shared file system through BeeGFS quotas.[12]
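For dynamic provisioning, the driver is typically configured through a Kubernetes StorageClass. The fragment below is a hedged sketch: the provisioner name and parameter keys follow the BeeGFS CSI driver's documented schema at the time of writing, but the hostname, path and values are placeholders and the exact schema should be checked against the driver version in use.

```yaml
# Hypothetical StorageClass for dynamic provisioning with the BeeGFS CSI
# driver; hostname, base path and values are illustrative only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: beegfs-dynamic
provisioner: beegfs.csi.netapp.com
parameters:
  # Management service of the target BeeGFS file system (placeholder host):
  sysMgmtdHost: mgmtd.example.com
  # Each dynamically provisioned volume becomes a new subdirectory here:
  volDirBasePath: /k8s/dynamic
  # Striping and POSIX permissions applied to the new directory:
  stripePattern/chunkSize: 512k
  permissions/mode: "0755"
```

A PersistentVolumeClaim referencing this class would then cause the driver to create a new directory under `volDirBasePath` and expose it to the requesting container.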

Benchmarks


The following benchmarks were performed on Fraunhofer Seislab,[13] a test and experimental cluster at Fraunhofer ITWM with 25 nodes (20 compute plus 5 storage) and a three-tier storage hierarchy: 1 TB RAM, 20 TB SSD, 120 TB HDD. Single-node performance on the local file system without BeeGFS is 1,332 MB/s (write) and 1,317 MB/s (read).

The nodes are equipped with two Intel Xeon X5660 CPUs, 48 GB RAM, four Intel 510 Series SSDs (RAID 0, ext4) and QDR InfiniBand, and run Scientific Linux 6.3 with kernel 2.6.32-279 and FhGFS 2012.10-beta1.

BeeGFS and exascale


Fraunhofer ITWM is participating in the Dynamic-Exascale Entry Platform – Extended Reach (DEEP-ER) project of the European Union,[14] which addresses the problems of the growing gap between compute speed and I/O bandwidth, and system resiliency for large-scale systems.

Some of the aspects that BeeGFS developers are working on under the scope of this project are:

  • support for tiered storage,
  • POSIX interface extensions,
  • fault tolerance and high availability (HA), and
  • improved monitoring and diagnostic tools.

The plan is to keep the POSIX interface for backward compatibility but also allow applications more control over how the file system handles things like data placement and coherency through API extensions.


References

  1. ^ "Latest stable BeeGFS release". September 2024.
  2. ^ "FhGFS: A Fast and Scalable Parallel Filesystem | FileSystems | Columns". www.clustermonkey.net. Retrieved 2019-01-13.
  3. ^ "BeeGFS End-User License Agreement (EULA)". Fraunhofer ITWM. February 22, 2012. Retrieved March 15, 2014.
  4. ^ "ThinkParQ website". Retrieved March 17, 2014.
  5. ^ Rich Brueckner (March 13, 2014). "Fraunhofer to Spin Off Renamed BeeGFS File System". insideHPC. Retrieved March 17, 2014.
  6. ^ "FraunhoferFS High-Performance Parallel File System". ClusterVision eNews. November 2012. Archived from the original on March 17, 2014. Retrieved March 17, 2014.
  7. ^ "... And Fraunhofer". StorageNewsletter.com. June 18, 2010. Retrieved March 17, 2014.
  8. ^ "VSC-2". Top500 List. June 20, 2011. Retrieved March 17, 2014.
  9. ^ "Abel". Top500 List. June 18, 2012. Retrieved March 17, 2014.
  10. ^ "BeeGFS - The Leading Parallel Cluster File System". BeeGFS. Retrieved 2017-12-07.
  11. ^ "Drivers - Kubernetes CSI Developer Documentation".
  12. ^ "BeeGFS CSI Driver". GitHub. 11 October 2021.
  13. ^ Christian, Mohrbacher (September 24, 2015). "BeeGFS - Not only for HPC" (PDF).
  14. ^ "DEEP-ER Project Website". Retrieved March 17, 2014.