
Weird disk outages around end of month #15

Open
tripleee opened this issue Dec 8, 2022 · 5 comments


tripleee commented Dec 8, 2022

A month ago I had to restart Halflife a number of times after the month had rolled over, and now I'm seeing the same thing again.

In brief, it eats up all the disk space and requires a number of restarts before the space is properly reclaimed.

This could be a weird artifact of the Docker deployment model and/or of how it works on the EC2 instance where I'm running this. The deployment model should probably be reworked altogether.

tripleee commented:

I'm guessing the number of attacks has increased, causing the log directories (including logs inside Docker) to grow significantly. Right now it runs out of disk space every few hours.

Created a .forward to make errors more visible; the system tries to send mail when it runs out of disk space.
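For reference, a .forward file lives in the home directory of the account receiving the mail and lists one forwarding destination per line; local mail (cron output, daemon errors, and so on) then gets delivered to that address instead of a local mailbox. A minimal sketch, with a placeholder address:

```
user@example.com
```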

Consider installing fail2ban to cut down on the attempts to log in over SSH and the various prods against the websocket.

tripleee self-assigned this Dec 13, 2022

tripleee commented Dec 13, 2022

In the interim, I'm compressing large logs; review and remove these in a few weeks:

  • /var/log/audit/audit.log.[1-4].xz
  • /var/log/*-2021*.xz
  • /var/log/20221[01]*.xz
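The compression step could look roughly like the sketch below. It runs against a scratch directory so it is safe to try anywhere; on the real host the globs would target /var/log as listed above, and the file names here are made up for the demo.

```shell
# Demo of in-place log compression against a scratch directory; on the
# real host the globs would be /var/log/audit/audit.log.[1-4],
# /var/log/*-2021*, and /var/log/20221[01]*.
logdir=$(mktemp -d)
printf 'old audit data\n' > "$logdir/audit.log.1"
printf 'old app data\n'   > "$logdir/app-20210101.log"
# xz -9 compresses each file in place: foo becomes foo.xz and the
# uncompressed original is removed.
xz -9 "$logdir"/audit.log.[1-4] "$logdir"/*-2021*
ls "$logdir"
```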

tripleee commented:

Installed fail2ban following https://s3bubble.com/installing-fail2ban-on-ec2-ami-instance/ (the page has horrible English and formatting errors, but it's possible to figure out what it's trying to say; I ignored the nmap stuff).

Added IP addresses to the ignoreip list in /etc/fail2ban/jail.local from the output of last -iad, though only Double Beep and I have logged in recently.
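The relevant jail.local fragment would look something like this; the addresses below are placeholders standing in for the ones taken from last -iad:

```
[DEFAULT]
# Never ban the hosts we log in from ourselves.
ignoreip = 127.0.0.1/8 203.0.113.17 198.51.100.42
```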

tripleee commented:

As a stopgap measure, installed a cron job to run ./restart every odd hour. Judging by a quick inspection of recent manual restarts, each restart frees up enough disk space to keep running for a couple of hours. Hopefully this can be scaled down once the situation stabilizes again.
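The crontab entry could look along these lines; the checkout path is a placeholder, and ./restart is the script mentioned above:

```
# min hour dom mon dow  command -- run the restart script at every odd hour
0 1-23/2 * * * cd /path/to/checkout && ./restart
```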

tripleee commented:

Aggressively banning new attackers for the time being.

#!/bin/bash
# Ban every IP seen in the failed-login log since the previous run.
# The file "banned" records the most recent IP we banned, so we know
# where to stop scanning.

ip=$(head -n 1 banned)

if ! [[ "$ip" ]]; then
    echo "$0: banned is empty -- aborting" >&2
    exit 12
fi

# lastb -iadF lists failed logins newest first, with the IP address in
# the last field.  awk prints each IP until it reaches the previously
# banned one; tee saves the newest IP for the next run (only when there
# is one, so an empty run doesn't clobber the file); xargs -r bans each
# IP and does nothing when the list is empty.
lastb -iadF |
awk -v latest="$ip" '$NF == latest { exit }
    { print $NF }' |
tee >(head -n 1 >banned.tmp; [ -s banned.tmp ] && mv banned.tmp banned || rm -f banned.tmp) |
xargs -rn 1 fail2ban-client set ssh-iptables banip
