Setup

This page describes how to set up AFS on a server. If you are looking for the client configuration, see Client Setup.

TL;DR: use Ansible for both clients and servers.
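
In practice this means a new server is normally built by running the openafs-server play from the Ansible repo rather than by following the manual steps below. The playbook and inventory names in the sketch here are placeholders; the real ones live in the GitLab repo.

    # Hypothetical invocation; substitute the actual playbook and inventory from the Ansible repo.
    ansible-playbook -i hosts openafs-server.yml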

Packages

On Ubuntu, the following packages are needed for an OpenAFS server to operate (a combined install command is sketched after the list).

  • openafs-modules-dkms

  • openafs-krb5

  • openafs-fileserver

  • openafs-dbserver

  • openafs-client
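
As a minimal sketch, all of the packages above can be installed in one step (run as root; openafs-modules-dkms additionally needs the matching kernel headers so the DKMS module can build):

    # Install the OpenAFS server, database server, client, and Kerberos integration packages.
    apt-get install openafs-modules-dkms openafs-krb5 openafs-fileserver openafs-dbserver openafs-client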

Configuration

The QEMU configuration for each OpenAFS server attaches three disks:

  • openafs<NUMBER> (attached as /dev/vda) which contains the root partition.

  • openafs<NUMBER>-vicepa (attached as /dev/vdb) which contains the vicepa partition and is mounted as /vicepa.

  • openafs<NUMBER>-vicepb (attached as /dev/vdc) which contains the vicepb partition and is mounted as /vicepb.

The disks themselves are stored in Ceph.
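
If the vice partitions ever need to be prepared by hand, a minimal sketch follows. The ext4 filesystem type and the exact device names are assumptions based on the disk layout above; the Ansible play remains authoritative.

    # Format the vice disks and mount them where the fileserver expects them.
    mkfs.ext4 /dev/vdb
    mkfs.ext4 /dev/vdc
    mkdir -p /vicepa /vicepb
    printf '/dev/vdb /vicepa ext4 defaults 0 2\n/dev/vdc /vicepb ext4 defaults 0 2\n' >> /etc/fstab
    mount -a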

The configuration files CellServDB and UserList are copied to /etc/openafs/server, then the following commands are run:

  • Create the ptserver process:

    bos create <hostname> ptserver simple /usr/lib/openafs/ptserver -localauth
  • Create the vlserver process:

    bos create <hostname> vlserver simple /usr/lib/openafs/vlserver -localauth
  • Create the buserver process:

    bos create <hostname> buserver simple /usr/lib/openafs/buserver -localauth
  • Create the dafs process:

    bos create <hostname> dafs dafs /usr/lib/openafs/dafileserver /usr/lib/openafs/davolserver /usr/lib/openafs/salvageserver /usr/lib/openafs/dasalvager -localauth
  • Start the backup cron job:

    bos create <hostname> dailybackup cron -cmd "/usr/bin/vos backupsys -server <hostname> -localauth" "2:00"

Note: these commands should only be run once, when creating a new OpenAFS server.
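
After the instances have been created, their state can be checked with bos status. This verification step is not part of the original procedure; it is a sketch using the same -localauth style as the commands above.

    # Confirm that the ptserver, vlserver, buserver, and dafs instances are running.
    bos status <hostname> -long -localauth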

Sample CellServDB

>csl.tjhsst.edu    #Cell name
198.38.16.19    #openafs1.csl.tjhsst.edu
198.38.16.22    #openafs2.csl.tjhsst.edu
198.38.16.23    #openafs3.csl.tjhsst.edu
198.38.16.24    #openafs4.csl.tjhsst.edu
198.38.16.25    #openafs5.csl.tjhsst.edu

Sample UserList

2019okulkarn.admin
2020fouzhins.root

Entries correspond to Kerberos principals of administrators who should be able to run bos and vos.
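
New entries can be added either by editing UserList directly on each server or with bos adduser. The principal name below is only a placeholder.

    # Grant a (hypothetical) admin principal the right to run privileged bos and vos commands on this server.
    bos adduser <hostname> 2021example.admin -localauth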

Automation

As it says at the beginning, the Ansible repo on GitLab is the ultimate source of authority for configuring OpenAFS. This document exists to provide a summary of the openafs-server play in case something happens to the repo.