AFS was developed at the Carnegie Mellon University Information Technology Center (ITC); development and ongoing maintenance are in the hands of Transarc Corporation. A version of AFS called the Distributed File System (DFS) is a component of the Open Software Foundation (OSF) Distributed Computing Environment (DCE). Architecturally, AFS is similar to NFS (Network File System). AFS/DFS provides a way to combine disparate servers and client computers into a globally shared information system.
AFS is specifically designed to provide reliable file services in large distributed environments. To keep such environments manageable, AFS imposes a cell-based structure. A cell is a collection of file servers and client systems in a separate administrative compartment managed by a specific authority; typically it represents the IT resources of one organization. Users can easily share information with other users in their cell. They can also share information with users in different cells, subject to the access rights granted by the authorities of those cells.
A primary purpose of AFS/DFS is to let users access information from anywhere so that they can collaborate and share it. The barriers that separate the file systems of different network operating systems are removed.
AFS/DFS offers the following functions:
- A file server process responds to file service requests from client workstations, manages the directory structure, maintains file and directory status information, and verifies user access.
- A BOS (Basic OverSeer) server process runs on a designated server. It monitors the operation of the other server processes and can restart them without human intervention.
- A volume server process handles file system operations on volumes, such as creating, moving, replicating, backing up, and restoring them.
- Replication automatically maintains replicas of information in multiple locations. Replication can take place while users remain online.
- Optimized performance is achieved by caching frequently accessed files on local disks while ensuring that the cached data stays up to date. This helps to avoid network bottlenecks.
- Files can be moved to different systems to balance the load on servers. A VL (volume location) server process provides location transparency for volumes, so that if a volume is moved, users can still access it without knowing that it has moved.
- Security controls govern which users and groups may access which information. AFS uses encrypted login mechanisms and flexible access control lists; the authentication system is based on Kerberos.
- Client and server machines can be managed from anywhere, so fewer administrative systems are needed. A system monitoring tool provides an overview of system load and warns administrators of possible problems.
- Support is provided for building scalable web servers.
- Clustering in DFS enables administrators to distribute processor-intensive jobs across a network, with parts of the processing task performed on different computers.
- Clustering in DFS also allows files to be stored on a collection of smaller, less expensive computers rather than on one massive server.
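The cache-consistency idea in the list above can be illustrated with a small sketch. All names here are hypothetical, not the actual AFS cache manager API, and real AFS uses server-initiated callbacks rather than the per-read version check used below; the sketch only shows how a client can serve repeated reads from a local copy while still detecting stale data.

```python
# Sketch: client-side caching with validity checking (hypothetical
# names, not the real AFS cache manager; AFS uses server callbacks
# rather than the polling shown here).

class FileServer:
    def __init__(self):
        self.files = {}                   # path -> (version, data)

    def fetch(self, path):
        return self.files[path]           # full fetch: (version, data)

    def version_of(self, path):
        return self.files[path][0]        # cheap status check only

class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}                   # path -> (version, data) on "local disk"

    def read(self, path):
        cached = self.cache.get(path)
        if cached and cached[0] == self.server.version_of(path):
            return cached[1]              # served locally, no data transfer
        entry = self.server.fetch(path)   # cache miss or stale copy
        self.cache[path] = entry
        return entry[1]

srv = FileServer()
srv.files["/a"] = (1, b"v1")
cli = CachingClient(srv)
assert cli.read("/a") == b"v1"   # first read fetches and caches
srv.files["/a"] = (2, b"v2")     # file updated on the server
assert cli.read("/a") == b"v2"   # stale cache detected, refetched
```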
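Location transparency via the VL server, described in the list above, boils down to an extra level of indirection: clients look up a volume's current server on each access instead of hard-coding it. A minimal sketch, with hypothetical names:

```python
# Sketch: volume location transparency (hypothetical names, not the
# AFS VL protocol). Because clients resolve the volume through the
# VL server, moving a volume never changes the path users see.

class VLServer:
    def __init__(self):
        self.location = {}                # volume name -> file server

    def lookup(self, volume):
        return self.location[volume]

class Client:
    def __init__(self, vl):
        self.vl = vl

    def open(self, volume, path):
        server = self.vl.lookup(volume)   # indirection hides the location
        return f"{server}:{volume}{path}"

vl = VLServer()
vl.location["user.alice"] = "fs1"
c = Client(vl)
assert c.open("user.alice", "/notes.txt") == "fs1:user.alice/notes.txt"
vl.location["user.alice"] = "fs2"         # admin moves the volume
assert c.open("user.alice", "/notes.txt") == "fs2:user.alice/notes.txt"
```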
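The access control lists mentioned above grant rights to both users and groups; a principal's effective rights are the union of everything granted to it directly and via its groups. This simplified sketch uses AFS-style single-letter rights (r, l, i, d, w, k, a) but is not the real AFS implementation, which attaches ACLs to directories:

```python
# Sketch: an AFS-style ACL check (simplified; real AFS attaches
# ACLs to directories and evaluates them in the file server).

ACL = dict  # principal (user or group name) -> set of rights letters

def has_rights(acl: ACL, user: str, groups: set, needed: set) -> bool:
    """A user holds the union of their own rights and the rights
    of every group they belong to."""
    granted = set(acl.get(user, set()))
    for g in groups:
        granted |= acl.get(g, set())
    return needed <= granted

acl = {"alice": set("rlidwka"),            # owner: all rights
       "staff": set("rl")}                 # group: read and lookup only
assert has_rights(acl, "alice", set(), set("rw"))
assert has_rights(acl, "bob", {"staff"}, set("rl"))
assert not has_rights(acl, "bob", {"staff"}, set("w"))
```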
DFS competes with Sun Microsystems' NFS in some environments. Transarc drew the following comparison between DFS and NFS:
- Location transparency: in DFS, a file's name is independent of the physical location of its data. NFS file names are tied to the physical location of the files.
- Performance: DFS uses client-side data caching to reduce network traffic. NFS does not.
- Replication: DFS supports replication. NFS does not.
- Availability: with DFS, files remain available during system maintenance. This is not the case with NFS.
- Security: DFS supports encrypted login and encrypted data transfer. NFS does not.
- Access control: DFS supports access control lists for user and group accounts. NFS does not.