NextCloud setup

I made the leap to NextCloud, a popular open-source, self-hosted alternative to Dropbox. I've used Dropbox for years and I like them, but I've never been thrilled with their pricing model. It's annoying that my wife has to pay for a full Dropbox subscription when she needs only a few gigs more than Dropbox's free plan allows, and this is a big plus for NextCloud - it lets you reduce costs by creating private "data co-operatives" for your family or close friends.

Stuff you'll need

Know-how

NextCloud is great and all, but there is a bit of a knowledge hurdle you'll need to get over to self-host. If you're a "cloud techie" you're probably good to go, but for most regular people who want something that just works out of the box, I'm afraid there be a few dragons in here.

For this solution I'm using Docker, so I'm assuming you're comfortable with that, as well as configuring and troubleshooting web applications on virtual private servers.

Domain

To get the full secure experience you're going to need your own domain. It is possible to access a NextCloud instance directly via server IP, but this bypasses SSL encryption so anyone else on your network will be able to see your traffic. So, domain up. You don't need a new domain though - if you have one already, a subdomain on that will do just fine. Also, don't worry about having to buy or set up SSL certificates - LetsEncrypt does all of that automagically. What a time to be alive.
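
Once the DNS record is in place, you can sanity-check it from any machine before going further. Something like this works - nextcloud.example.com is just a placeholder for whatever subdomain you picked:

# should print your VPS's public IP
dig +short nextcloud.example.com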

Cloudflare user?

If you're using Cloudflare, remember to disable proxying on the subdomain that points to your NextCloud instance - in my case I've got Cloudflare's forced HTTPS enabled and it left me stuck in an infinite redirect loop. If you're in this situation, you can still access your NextCloud container by opening a port to it with Docker and hitting the container by its IP:PORT directly.
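
If you're not sure whether you're in a redirect loop, a quick curl against your domain (a placeholder below) will show the redirect chain without a browser getting in the way:

# -I fetches headers only, -L follows redirects, capped at 5 hops
curl -sIL --max-redirs 5 https://nextcloud.example.com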

VPS

You're going to need a dedicated server for this, in the sense that NextCloud will occupy ports 80 and 443 and will therefore be the only public web application you can run on it. If you want to add additional web apps, you'll need to add them to this Docker solution and have the Nginx container proxy to them too, which sounds like dependency hell to me.

I'm running Ubuntu 16.04 on a Nanode from my cloud platform of choice, Linode, just to see if their smallest server option can handle this load. Turns out it can - it's slow at times but it manages my two-user setup. If you're running on an older version of Ubuntu like me, you should force install a more up-to-date version of Docker, else just go with Ubuntu 18.* or better.
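
For the record, this is roughly how I'd get a current Docker and docker-compose onto an older Ubuntu - treat it as a sketch, and adjust the compose version number to whatever is current:

# install a recent Docker release via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# install docker-compose as a standalone binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose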

Storage

Finally, you're going to need a place to store your files, and this is one of the trickiest parts of this operation. NextCloud supports a variety of storage backends, but most of them involve managing disks - either in the cloud, or in a physical machine. Disks require redundancy and backing up, and that roughly triples your investment cost. I'm using S3 - you give up the flexibility of a proper filesystem, but you gain built-in reliability from the S3 provider. Once again I'm using Linode and their Amazon S3-compatible Object Storage, as I'm trying to stay off Googmazoft.

Note that your S3 files are always available to you directly using any S3 client, like Cyberduck. This means that if for some reason your NextCloud server goes up in flames, you can still get at your files - that's some peace of mind.
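
For example, with the aws CLI you can list your files without NextCloud being involved at all - the bucket name and region below are placeholders (use your own, and configure your Linode access key with "aws configure" first):

aws s3 ls s3://my-nextcloud-bucket --endpoint-url https://eu-central-1.linodeobjects.com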

Installing

We're going to run five Docker containers linked together into a single "solution":

  • an Nginx web server that will be exposed to the public, and will help procure security certificates
  • a LetsEncrypt container that will keep our security certificates up-to-date (these are needed to get HTTPS to work)
  • a MariaDB database container - NextCloud needs this to store its own data. Note that your files will not be stored in here, only file metadata
  • a Redis instance for file locking and caching, to take some load off MariaDB
  • finally, the NextCloud container itself

I based my setup on this well-written guide at www.ssdnodes.com, to which I made a few tweaks:

  • I added Redis
  • I set container version numbers, as it's good practice to pin container versions so you can replicate a setup - containers change over time, but versions don't.
  • I define the network in the compose file because that's one less piece of manual interaction with the command line
  • data volumes are set relative to the docker-compose file instead of being managed by Docker. We want to know where our data is being stored so we can back it up.

My entire docker-compose file follows - don't forget to change passwords etc. (search for the seven instances of "# set me" if in doubt)

version: "3"
services:


    proxy:
        image: jwilder/nginx-proxy:alpine
        labels:
            # labels needed by lets encrypt to identify container to generate certs in
            - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
        container_name: nextcloud-proxy
        networks:
            - nextcloud_network
        ports:
            - 80:80
            - 443:443
        volumes:
            - ./proxy/conf.d:/etc/nginx/conf.d:rw
            - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
            - ./proxy/html:/usr/share/nginx/html:rw
            - ./proxy/certs:/etc/nginx/certs:ro
            - /etc/localtime:/etc/localtime:ro
            - /var/run/docker.sock:/tmp/docker.sock:ro
        restart: unless-stopped


    letsencrypt:
        image: jrcs/letsencrypt-nginx-proxy-companion:v1.12.1
        container_name: nextcloud-letsencrypt
        depends_on:
            - proxy
        networks:
            - nextcloud_network
        volumes:
            - ./proxy/certs:/etc/nginx/certs:rw
            - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
            - ./proxy/html:/usr/share/nginx/html:rw
            - /etc/localtime:/etc/localtime:ro
            - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped


    db:
        image: mariadb:10.5.1
        container_name: nextcloud-mariadb
        networks:
            - nextcloud_network
        volumes:
            - ./db:/var/lib/mysql
            - ./dbdumps:/var/dbdumps
            - /etc/localtime:/etc/localtime:ro
        environment:
            - MYSQL_ROOT_PASSWORD=...   # set me
            - MYSQL_PASSWORD=...        # set me
            - MYSQL_DATABASE=...        # set me
            - MYSQL_USER=...            # set me
        restart: unless-stopped


    redis:
        container_name: nextcloud-redis
        image: redis:5.0.8
        restart: unless-stopped
        networks:
            - nextcloud_network
        volumes:
            - ./redis/data:/data
        command: ["redis-server", "--appendonly", "yes"]


    app:
        image: nextcloud:18.0.2
        container_name: nextcloud-app
        networks:
            - nextcloud_network
        depends_on:
            - letsencrypt
            - proxy
            - redis
            - db
        volumes:
            - ./nextcloud:/var/www/html
            - ./app/config:/var/www/html/config
            - ./app/custom_apps:/var/www/html/custom_apps
            - ./app/data:/var/www/html/data
            - ./app/themes:/var/www/html/themes
            - /etc/localtime:/etc/localtime:ro
        environment:
            - VIRTUAL_HOST=YOURDOMAINHERE           # set me
            - LETSENCRYPT_HOST=YOURDOMAINHERE       # set me
            - [email protected]     # set me
        restart: unless-stopped        


networks:


    nextcloud_network:
        driver: bridge

There's a lot of stuff in this compose file - I suggest you refer to the article I cribbed it from for an in-depth explanation if you're new to Docker. To run it, simply copy/paste the entire script above into a docker-compose.yml in the folder you want your NextCloud solution to live in and run

docker-compose up -d

Confirm that your containers are running with

docker ps -a 

There you go - installation complete, thanks Docker!
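
If you want to watch the certificate magic happen (or debug it when it doesn't), tailing the container logs is the easiest way - these are the container names from the compose file above:

# follow the LetsEncrypt companion while it negotiates certificates
docker logs -f nextcloud-letsencrypt

# peek at the proxy and app if something looks off
docker logs --tail 50 nextcloud-proxy
docker logs --tail 50 nextcloud-app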

Configuring

You should now be able to access your NextCloud container via a browser at whatever domain you set in the compose file. Select MariaDB/MySQL as the database platform, and finish the setup procedure.

If you get the error

Access through untrusted domain    Please contact your administrator.

You'll need to explicitly add your domain to YOURNEXTCLOUDFOLDER/app/config/config.php

<?php
$CONFIG = array (
    ...
    'trusted_domains' =>
        array (
            0 => 'your-ip:8080',
            1 => 'yourdomain.com'
        ),
    ...
);

Your VPS's public-ip:8080 should already be there, but your domain might be missing.
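
If you'd rather not edit config.php by hand, the same change can be made with NextCloud's occ tool from outside the container - here assuming your domain goes in at index 1, as in the snippet above:

docker exec -u www-data nextcloud-app php occ config:system:set trusted_domains 1 --value=yourdomain.com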

Enable Redis

Even though NextCloud doesn't strictly need Redis, I found that when dealing with real-life file loads it generates a lot of hanging file locks, which will break your server and require some database query cleanup (yuck!). Enabling Redis doesn't stop locks from occurring entirely, but they become much rarer and resolve so quickly you barely notice them.

In YOURNEXTCLOUDFOLDER/app/config/config.php add this to enable Redis

<?php
$CONFIG = array (
    ...
    'filelocking.enabled' => true,
    'memcache.locking' => '\OC\Memcache\Redis',
    'redis' => array(
        'host' => 'redis',
        'port' => 6379,
        'timeout' => 0.0
    ),
    ...
);

No password is defined because our Redis container is visible only on the closed NextCloud network. Restart your solution for this to take effect, then confirm that Redis is working by running this utility command

docker exec -it -u www-data nextcloud-app bash -c "php occ files:scan --all"

This should return a folder and file count (zero if you haven't added anything). If Redis is failing, you'll get a clear error.
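
If you want a second opinion, you can poke Redis itself - a PONG and a non-empty keyspace suggest NextCloud really is writing its locks and cache entries there:

docker exec -it nextcloud-redis redis-cli ping
docker exec -it nextcloud-redis redis-cli info keyspace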

The Client

My NextCloud Windows client wasn't happy hitting the Nginx proxy and got stuck in an OAuth authentication loop. To fix this, while still in config.php, enable overwriteprotocol by adding the following setting

<?php
$CONFIG = array (
    ...
    'overwriteprotocol' => 'https',
    ...
);

Restart your solution for this to take effect.
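
You don't have to bounce everything - restarting just the app container is usually enough for config.php changes, though a full down/up works too:

docker-compose restart app
# or, heavier handed
docker-compose down && docker-compose up -d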

File sizes and nginx overrides

Nginx's default allowed upload size is way too low - in real-life situations you'll likely want to deal with large (multi-megabyte) files, which will cause this error

413 Request Entity Too Large

Override this by creating the file YOURNEXTCLOUDFOLDER/proxy/conf.d/customproxysettings.conf and adding to it

client_max_body_size 100m;

or whatever size you want (use "g" for gigabytes). You can add other Nginx tweaks here too if desired. Restart your solution for this to take effect.
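
To confirm the proxy actually picked up the override after the restart, you can ask Nginx to dump its effective config and grep for the setting:

docker exec nextcloud-proxy nginx -T | grep client_max_body_size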

S3 storage

Set up your Linode object storage bucket, and set up an access key for it. Back in NextCloud, once you've gotten through the setup wizard (make sure you're logged in as admin):

  • go to "apps" and enable the "external storage" app.
  • go back to settings > external storage > add storage
  • Select "Amazon S3" and enter your Linode bucket info:
    • Hostname is "linodeobjects.com"
    • Port is 443
    • Region should look like "eu-central-1", get this from your bucket url
    • Enable SSL, enable Path Style.
    • Under extended options for each bucket, set "check for changes" to "never" - this helps reduce the number of times you hit your S3 backend, which reduces the likelihood of getting "Please reduce your request rate" errors from Linode. (NextCloud unfortunately doesn't yet give us an option to throttle request rates to S3)
  • Go to the NextCloud "Apps" page and disable the "Versions" and "Deleted Files" apps. These apps keep version histories and trashed copies of your files when you change them; unfortunately this doesn't work with external storage and will eventually flood your NextCloud install drive and lock it up.

When I "moved in" to NextCloud I simply cut/pasted my several-thousand-file Dropbox folder into my NextCloud folder - that's a bad idea. It led to too many requests on my Linode bucket, and I kept getting "Please reduce your request rate" errors. So far the only way I've found around this is to move my files at a slower rate, but since I'll very seldom be doing this, I can live with it.
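
If you want to script the slow move rather than babysit it, a bandwidth-capped rsync into the synced folder works as a crude throttle, since files only trickle into the sync folder for the client to pick up - the paths below are placeholders for your old Dropbox folder and your NextCloud client's sync folder:

# limit the copy to roughly 1 MB/s so files land in the sync folder gradually
rsync -a --bwlimit=1000 ~/Dropbox/ ~/NextCloud/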

Stay tuned for

I hope to write a followup post once I cobble together a single bash script for backing up, as well as my impressions after several days of active use of NextCloud.

Conclusions

I'm still in the process of moving in and trying out NextCloud. It's an impressive product, especially when you look at the amount of work that has gone into the mobile and Windows clients. The downside is that it's not 100% mature yet - you still occasionally have to play "system administrator" and fiddle around in the database or at the command line, and this doesn't inspire total confidence. For that reason my current plan is to use NextCloud for media only (images and sound files), while keeping a few small critical data files in Dropbox.
