Merge remote-tracking branch 'upstream/master' into oletools
commit cea533ae57

@@ -0,0 +1,22 @@
""" Add user.allow_spoofing

Revision ID: 7ac252f2bbbf
Revises: f4f0f89e0047
Create Date: 2022-11-20 08:57:16.879152

"""

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '7ac252f2bbbf'
down_revision = 'f4f0f89e0047'


def upgrade():
    op.add_column('user', sa.Column('allow_spoofing', sa.Boolean(), nullable=False, server_default=sa.sql.expression.false()))


def downgrade():
    op.drop_column('user', 'allow_spoofing')
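In effect, `upgrade()` issues an `ALTER TABLE` that adds the column with a server-side default, so rows that already exist are backfilled with `false` rather than `NULL`. A minimal sketch of that behaviour using an in-memory SQLite database (the raw SQL and sample data are illustrative; real deployments run the migration through Alembic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (email TEXT PRIMARY KEY)")
conn.execute("INSERT INTO user VALUES ('admin@example.com')")

# Rough equivalent of op.add_column(..., nullable=False,
# server_default=false()): the default backfills existing rows,
# which is what makes the NOT NULL constraint satisfiable.
conn.execute(
    "ALTER TABLE user ADD COLUMN allow_spoofing BOOLEAN NOT NULL DEFAULT 0"
)

row = conn.execute("SELECT allow_spoofing FROM user").fetchone()
print(row[0])  # -> 0: the pre-existing row picked up the default
```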
@@ -1,364 +0,0 @@
# Install Mailu on a docker swarm

## Prerequisites

### Swarm

In order to deploy Mailu on a swarm, you will first need to initialize the swarm:

The main command will be:
```bash
docker swarm init --advertise-addr <IP_ADDR>
```
See https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

If you want to add other managers or workers, use:
```bash
docker swarm join --token xxxxx
```
See https://docs.docker.com/engine/swarm/join-nodes/

You now have a working swarm, and you can check its status with:
```bash
core@coreos-01 ~/git/Mailu/docs/swarm/1.5 $ docker node ls
ID                            HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
xhgeekkrlttpmtgmapt5hyxrb     black-pearl       Ready    Active                          18.06.0-ce
sczlqjgfhehsfdjhfhhph1nvb *   coreos-01         Ready    Active         Leader           18.03.1-ce
mzrm9nbdggsfz4sgq6dhs5i6n     flying-dutchman   Ready    Active                          18.06.0-ce
```

### Volume definition
For data persistence (the Mailu services might be launched or relaunched on any of the swarm nodes), we need Mailu data to be stored in a manner accessible by every manager or worker in the swarm.
Hereafter we will use an NFS share:
```bash
core@coreos-01 ~ $ showmount -e 192.168.0.30
Export list for 192.168.0.30:
/mnt/Pool1/pv 192.168.0.0
```

On the NFS server, I am using the following /etc/exports:
```bash
$ more /etc/exports
/mnt/Pool1/pv -alldirs -mapall=root -network 192.168.0.0 -mask 255.255.255.0
```
On the NFS server, I created the Mailu directory (in fact I copied a working Mailu set-up):
```bash
$ mkdir /mnt/Pool1/pv/mailu
```

On your manager node, mount the NFS share to check that it is available:
```bash
core@coreos-01 ~ $ sudo mount -t nfs 192.168.0.30:/mnt/Pool1/pv/mailu /mnt/local/
```
If this is OK, you can unmount it:
```bash
core@coreos-01 ~ $ sudo umount /mnt/local/
```

### Networking mode
On a swarm, the services are available (in the default mode) through a routing mesh managed by docker itself. With this mode, each service is given a virtual IP address and docker manages the routing between this virtual IP and the container(s) providing the service.
With this default networking mode, I cannot get login working properly... As found in https://github.com/Mailu/Mailu/issues/375 , a workaround is to use the dnsrr networking mode, at least for the front services.

The main consequence/limitation is that the front services will *not* be available on every node, but only on the node where they are deployed. In my case, I have only one manager and I chose to deploy the front service to the manager node, so I know on which IP the front service will be available (i.e. the IP address of my manager node).

### Variable substitution and docker-compose.yml
The docker stack deploy command doesn't support variable substitution in the .yml file itself (but we can still use the .env file to pass variables to the services). As a consequence, we need to adjust the docker-compose file in order to:
- remove all variables: $VERSION, $BIND_ADDRESS4, $BIND_ADDRESS6, $ANTIVIRUS, $WEBMAIL, etc.
- change the way we define the volumes (NFS share in our case)
- add a deploy section for every service

### Docker compose
An example docker-compose-stack.yml file is available here:

```yaml
version: '3.2'

services:

  front:
    image: mailu/nginx:1.5
    env_file: .env
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
      - target: 110
        published: 110
        mode: host
      - target: 143
        published: 143
        mode: host
      - target: 993
        published: 993
        mode: host
      - target: 995
        published: 995
        mode: host
      - target: 25
        published: 25
        mode: host
      - target: 465
        published: 465
        mode: host
      - target: 587
        published: 587
        mode: host
    volumes:
      # - "$ROOT/certs:/certs"
      - type: volume
        source: mailu_certs
        target: /certs
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  redis:
    image: redis:alpine
    restart: always
    volumes:
      # - "$ROOT/redis:/data"
      - type: volume
        source: mailu_redis
        target: /data
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  imap:
    image: mailu/dovecot:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/data:/data"
      - type: volume
        source: mailu_data
        target: /data
      # - "$ROOT/mail:/mail"
      - type: volume
        source: mailu_mail
        target: /mail
      # - "$ROOT/overrides:/overrides"
      - type: volume
        source: mailu_overrides
        target: /overrides
    depends_on:
      - front
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  smtp:
    image: mailu/postfix:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/data:/data"
      - type: volume
        source: mailu_data
        target: /data
      # - "$ROOT/overrides:/overrides"
      - type: volume
        source: mailu_overrides
        target: /overrides
    depends_on:
      - front
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  antispam:
    image: mailu/rspamd:1.5
    restart: always
    env_file: .env
    depends_on:
      - front
    volumes:
      # - "$ROOT/filter:/var/lib/rspamd"
      - type: volume
        source: mailu_filter
        target: /var/lib/rspamd
      # - "$ROOT/dkim:/dkim"
      - type: volume
        source: mailu_dkim
        target: /dkim
      # - "$ROOT/overrides/rspamd:/etc/rspamd/override.d"
      - type: volume
        source: mailu_overrides_rspamd
        target: /etc/rspamd/override.d
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  antivirus:
    image: mailu/none:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/filter:/data"
      - type: volume
        source: mailu_filter
        target: /data
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  webdav:
    image: mailu/none:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/dav:/data"
      - type: volume
        source: mailu_dav
        target: /data
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  admin:
    image: mailu/admin:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/data:/data"
      - type: volume
        source: mailu_data
        target: /data
      # - "$ROOT/dkim:/dkim"
      - type: volume
        source: mailu_dkim
        target: /dkim
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - redis
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  webmail:
    image: "mailu/roundcube:1.5"
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/webmail:/data"
      - type: volume
        source: mailu_data
        target: /data
    depends_on:
      - imap
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

  fetchmail:
    image: mailu/fetchmail:1.5
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/data:/data"
      - type: volume
        source: mailu_data
        target: /data
    logging:
      driver: none
    deploy:
      endpoint_mode: dnsrr
      replicas: 1
      placement:
        constraints: [node.role == manager]

volumes:
  mailu_filter:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/filter"
  mailu_dkim:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/dkim"
  mailu_overrides_rspamd:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/overrides/rspamd"
  mailu_data:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/data"
  mailu_mail:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/mail"
  mailu_overrides:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/overrides"
  mailu_dav:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/dav"
  mailu_certs:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/certs"
  mailu_redis:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,nolock,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/redis"
```

### Deploy Mailu on the docker swarm
Run the following command:
```bash
docker stack deploy -c docker-compose-stack.yml mailu
```
See how the services are being deployed:
```bash
core@coreos-01 ~ $ docker service ls
ID             NAME              MODE         REPLICAS   IMAGE                 PORTS
ywnsetmtkb1l   mailu_antivirus   replicated   1/1        mailu/none:1.5
pqokiaz0q128   mailu_fetchmail   replicated   1/1        mailu/fetchmail:1.5
```
Check a specific service:
```bash
core@coreos-01 ~ $ docker service ps mailu_fetchmail
ID             NAME                IMAGE                 NODE        DESIRED STATE   CURRENT STATE         ERROR   PORTS
tbu8ppgsdffj   mailu_fetchmail.1   mailu/fetchmail:1.5   coreos-01   Running         Running 11 days ago
```

### Remove the stack
Run the following command:
```bash
core@coreos-01 ~ $ docker stack rm mailu
```
@@ -1,337 +0,0 @@
# Install Mailu on a docker swarm

## Some warnings

### How Docker swarm works

Docker swarm enables replication and fail-over scenarios. As a feature, if a node dies or goes away, Docker will re-schedule its containers on the remaining nodes.
In order to take these decisions, docker swarm works on a consensus between managers regarding the state of nodes. It is therefore recommended to always have an uneven number of manager nodes, so that one side of a potential network split still holds a majority.

### Storage

On top of this, some of Mailu's containers rely heavily on disk storage. As noted below, the same dataset needs to be available on every host where the related containers may run. So Dovecot IMAP needs `/mailu/mail` replicated to every node it *may* be scheduled to run on. There are various solutions for this, like NFS and GlusterFS.

### When disaster strikes

So imagine 3 swarm nodes and 3 GlusterFS endpoints:

```
node-A -> gluster-A --|
node-B -> gluster-B --|--> Single file system
node-C -> gluster-C --|
```

Each node has a connection to the shared file system and maintains connections to the other nodes. Let's say Dovecot is running on `node-A`. Now a network error/outage occurs on the route between `node-A` and the remaining nodes, while `node-A` stays connected to the `gluster-A` endpoint. `node-B` and `node-C` conclude that `node-A` is down and reschedule Dovecot to start on one of them. Dovecot starts reading and writing its indexes on the **shared** filesystem. However, it is possible that the Dovecot on `node-A` is still up and handling some client requests. I've seen cases where this situation resulted in:

- Retained locks
- Corrupted indexes
- Users no longer able to read any of their mail
- Lost mail

### It gets funkier

Our original deployment also included `main.db` on the GlusterFS. Due to the above we corrupted it once, and we decided to move it to local storage and restrict the `admin` container to that host only. This inspired us to put some legwork into supporting different database back-ends like MySQL and PostgreSQL. We highly recommend using either of them in favor of SQLite.

### Conclusion

Although the above situation is less likely to occur on a stable (local) network, it does indicate a failure case where there is a probability of data loss or downtime. It may help to create redundant networks, but the effort might be too much for the actual results. We will need to look into better and safer methods of replicating mail data. For now, we regret to have to inform you that Docker swarm deployment is **unstable** and should be avoided in production environments.

-- @muhlemmer, 17th of January 2019.

## Prerequisites

### Swarm

In order to deploy Mailu on a swarm, you will first need to initialize the swarm:

The main command will be:
```bash
docker swarm init --advertise-addr <IP_ADDR>
```
See https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

If you want to add other managers or workers, use:
```bash
docker swarm join --token xxxxx
```
See https://docs.docker.com/engine/swarm/join-nodes/

You now have a working swarm, and you can check its status with:
```bash
core@coreos-01 ~/git/Mailu/docs/swarm/1.5 $ docker node ls
ID                            HOSTNAME          STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
xhgeekkrlttpmtgmapt5hyxrb     black-pearl       Ready    Active                          18.06.0-ce
sczlqjgfhehsfdjhfhhph1nvb *   coreos-01         Ready    Active         Leader           18.03.1-ce
mzrm9nbdggsfz4sgq6dhs5i6n     flying-dutchman   Ready    Active                          18.06.0-ce
```

### Volume definition
For data persistence (the Mailu services might be launched or relaunched on any of the swarm nodes), we need Mailu data to be stored in a manner accessible by every manager or worker in the swarm.

Hereafter we will assume that "Mailu Data" is available on every node at "$ROOT" (GlusterFS and NFS shares have been successfully used).

In this example, we are using:
- the mesh routing mode (the default mode). With this mode, each service is given a virtual IP address and docker manages the routing between this virtual IP and the container(s) providing the service.
- the default ingress mode.

### Allow authentication with the mesh routing
In order to allow every (front & webmail) container to access the other services, we will use the variable SUBNET.

Let's create the mailu_default network:
```bash
core@coreos-01 ~ $ docker network create -d overlay --attachable mailu_default
core@coreos-01 ~ $ docker network inspect mailu_default | grep Subnet
"Subnet": "10.0.1.0/24",
```
In the docker-compose.yml file, we will then use SUBNET = 10.0.1.0/24
In fact, the imap & smtp logs don't show the IPs of the front container(s), but the IP of "mailu_default-endpoint". So it is sufficient to set SUBNET to this specific IP (which can be found by inspecting the mailu_default network). The issue is that this endpoint is created while the stack is created; I didn't find a way to determine this IP before the stack creation...

### Limitation with the ingress mode
With the default ingress mode, the front container(s) will see all origin IPs as 10.255.0.x (which is the ingress-endpoint; it can be found by inspecting the ingress network).

This issue is known and discussed here:

https://github.com/moby/moby/issues/25526

A workaround (using network host mode and global deployment) is discussed here:

https://github.com/moby/moby/issues/25526#issuecomment-336363408

### Don't create an open relay!
As a side effect of this ingress mode "feature", make sure that the ingress subnet is not in your RELAYHOST, otherwise you would create an SMTP open relay :-(
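A minimal illustration of the point above (values hypothetical; depending on the Mailu version the relevant setting for networks allowed to relay without authentication is `RELAYNETS`, so check the configuration reference for your release):

```
# mailu.env -- illustrative values only
# List only networks you actually trust; never include the swarm
# ingress subnet (10.255.0.0/16 by default), or anyone reaching the
# published SMTP port could relay through your server.
RELAYNETS=192.168.0.0/24
```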
### Ratelimits

When using ingress mode, you probably want to disable rate limits, because all requests originate from the same IP address. Otherwise automated login attempts can easily DoS legitimate users.
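As an illustration (variable names differ between Mailu versions; treat these as assumptions to verify against your version's configuration reference), the authentication rate limit is set in `mailu.env`:

```
# mailu.env -- illustrative; with ingress mode every login appears to
# come from the same IP, so a strict per-IP limit locks everyone out.
AUTH_RATELIMIT=100/minute;10000/hour
```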
## Scalability
- smtp and imap are scalable
- front and webmail are scalable (provided SUBNET is used), although the Let's Encrypt magic might not like it (race condition? or a risk of being banned by the Let's Encrypt server if too many front containers attempt to renew the certs at the same time)
- redis, antispam, antivirus, fetchmail, admin, webdav have not been tested (hence replicas=1 in the following docker-compose.yml file)

## Docker secrets
There are DB_PW_FILE and SECRET_KEY_FILE environment variables available to specify files for these variables. These can be used to configure Docker secrets instead of writing the values directly into the `docker-compose.yml` or `mailu.env`.
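As a sketch of how these `_FILE` variables pair with Docker secrets (the secret name `mailu_secret_key` and the exact compose layout are assumptions for illustration):

```yaml
# Fragment of docker-compose-stack.yml -- names are illustrative.
services:
  admin:
    environment:
      # Mailu reads the key from this file instead of SECRET_KEY.
      - SECRET_KEY_FILE=/run/secrets/mailu_secret_key
    secrets:
      - mailu_secret_key

secrets:
  mailu_secret_key:
    external: true  # created beforehand, e.g. with `docker secret create`
```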
## Variable substitution and docker-compose.yml
The docker stack deploy command doesn't support variable substitution in the .yml file itself.
As a consequence, we cannot simply use ``` docker stack deploy -c docker-compose.yml mailu ```
Instead, we will use the following work-around:
``` echo "$(docker-compose -f /mnt/docker/apps/mailu/docker-compose.yml config 2>/dev/null)" | docker stack deploy -c- mailu ```

We also need to:
- add a deploy section for every service
- modify the way the ports are defined for the front service
- add the SUBNET definition for the admin (for imap), smtp and antispam services

## Docker compose
An example docker-compose-stack.yml file is available here:

```yaml
version: '3.2'

services:

  front:
    image: mailu/nginx:$VERSION
    restart: always
    env_file: .env
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
      - target: 110
        published: 110
      - target: 143
        published: 143
      - target: 993
        published: 993
      - target: 995
        published: 995
      - target: 25
        published: 25
      - target: 465
        published: 465
      - target: 587
        published: 587
    volumes:
      - "$ROOT/certs:/certs"
    deploy:
      replicas: 2

  redis:
    image: redis:alpine
    restart: always
    volumes:
      - "$ROOT/redis:/data"
    deploy:
      replicas: 1

  imap:
    image: mailu/dovecot:$VERSION
    restart: always
    env_file: .env
    volumes:
      - "$ROOT/mail:/mail"
      - "$ROOT/overrides:/overrides"
    depends_on:
      - front
    deploy:
      replicas: 2

  smtp:
    image: mailu/postfix:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    volumes:
      - "$ROOT/overrides:/overrides"
    depends_on:
      - front
    deploy:
      replicas: 2

  antispam:
    image: mailu/rspamd:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    volumes:
      - "$ROOT/filter:/var/lib/rspamd"
      - "$ROOT/dkim:/dkim"
      - "$ROOT/overrides/rspamd:/etc/rspamd/override.d"
    depends_on:
      - front
    deploy:
      replicas: 1

  antivirus:
    image: mailu/none:$VERSION
    restart: always
    env_file: .env
    volumes:
      - "$ROOT/filter:/data"
    deploy:
      replicas: 1

  webdav:
    image: mailu/none:$VERSION
    restart: always
    env_file: .env
    volumes:
      - "$ROOT/dav:/data"
    deploy:
      replicas: 1

  admin:
    image: mailu/admin:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    volumes:
      - "$ROOT/data:/data"
      - "$ROOT/dkim:/dkim"
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - redis
    deploy:
      replicas: 1

  webmail:
    image: mailu/roundcube:$VERSION
    restart: always
    env_file: .env
    volumes:
      - "$ROOT/webmail:/data"
    depends_on:
      - imap
    deploy:
      replicas: 2

  fetchmail:
    image: mailu/fetchmail:$VERSION
    restart: always
    env_file: .env
    deploy:
      replicas: 1

networks:
  default:
    external:
      name: mailu_default
```

## Deploy Mailu on the docker swarm
Run the following command:
```bash
echo "$(docker-compose -f /mnt/docker/apps/mailu/docker-compose.yml config 2>/dev/null)" | docker stack deploy -c- mailu
```
See how the services are being deployed:
```bash
core@coreos-01 ~ $ docker service ls
ID             NAME              MODE         REPLICAS   IMAGE                    PORTS
ywnsetmtkb1l   mailu_antivirus   replicated   1/1        mailu/none:master
pqokiaz0q128   mailu_fetchmail   replicated   1/1        mailu/fetchmail:master
```
Check a specific service:
```bash
core@coreos-01 ~ $ docker service ps mailu_fetchmail
ID             NAME                IMAGE                    NODE        DESIRED STATE   CURRENT STATE         ERROR   PORTS
tbu8ppgsdffj   mailu_fetchmail.1   mailu/fetchmail:master   coreos-01   Running         Running 11 days ago
```
You might also have a look at the logs:
```bash
core@coreos-01 ~ $ docker service logs -f mailu_fetchmail
```

## Remove the stack
Run the following command:
```bash
core@coreos-01 ~ $ docker stack rm mailu
```

## Notes on unbound resolver

In the Docker compose flavor we currently have the option to include the unbound DNS resolver. This does not work in Docker Swarm, as it is not possible to configure any static IP addresses. There is an [open issue](https://github.com/moby/moby/issues/24170) for this at Docker. However, this hasn't moved anywhere for some time now. For that reason we've chosen not to include the unbound resolver in the stack flavor.

If you still want to benefit from Unbound as a system resolver, you can install it system-wide. The following procedure was done on a Fedora 28 system and might need some adjustments for your system. Note that this will need to be done on every swarm node. In this example we will make use of `dnssec-trigger`, which is used to configure unbound. When installing this and running the service, unbound is pulled in as a dependency and does not need to be installed, configured or run separately.

Install the required packages (unbound will be installed as a dependency):

```
sudo dnf install dnssec-trigger
```

Enable and start the *dnssec-trigger* daemon:

```
sudo systemctl enable --now dnssec-triggerd.service
```

Configure NetworkManager to use unbound by creating the file `/etc/NetworkManager/conf.d/unbound.conf` with contents:

```
[main]
dns=unbound
```

You might need to restart NetworkManager for the changes to take effect:

```
sudo systemctl restart NetworkManager
```

Verify `resolv.conf`:

```
$ cat /etc/resolv.conf
# Generated by dnssec-trigger-script
nameserver 127.0.0.1
```

Most of this info was taken from this [Fedora Project page](https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver#How_To_Test).
@ -1,357 +0,0 @@
|
|||||||
# Install Mailu on a docker swarm
|
|
||||||
|
|
||||||
## Prequisites
|
|
||||||
|
|
||||||
### Swarm
|
|
||||||
|
|
||||||
In order to deploy Mailu on a swarm, you will first need to initialize the swarm:
|
|
||||||
|
|
||||||
The main command will be:
|
|
||||||
```bash
|
|
||||||
docker swarm init --advertise-addr <IP_ADDR>
|
|
||||||
```
|
|
||||||
See https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/
|
|
||||||
|
|
||||||
If you want to add other managers or workers, please use:
|
|
||||||
```bash
|
|
||||||
docker swarm join --token xxxxx
|
|
||||||
```
|
|
||||||
See https://docs.docker.com/engine/swarm/join-nodes/
|
|
||||||
|
|
||||||
You have now a working swarm, and you can check its status with:
|
|
||||||
```bash
|
|
||||||
core@coreos-01 ~/git/Mailu/docs/swarm/1.5 $ docker node ls
|
|
||||||
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
|
|
||||||
xhgeekkrlttpmtgmapt5hyxrb black-pearl Ready Active 18.06.0-ce
|
|
||||||
sczlqjgfhehsfdjhfhhph1nvb * coreos-01 Ready Active Leader 18.03.1-ce
|
|
||||||
mzrm9nbdggsfz4sgq6dhs5i6n flying-dutchman Ready Active 18.06.0-ce
|
|
||||||
```
|
|
||||||
|
|
||||||
### Volume definition
|
|
||||||
For data persistence (the Mailu services might be launched/relaunched on any of the swarm nodes), we need to have Mailu data stored in a manner accessible by every manager or worker in the swarm.
|
|
||||||
Hereafter we will use a NFS share:
|
|
||||||
```bash
|
|
||||||
core@coreos-01 ~ $ showmount -e 192.168.0.30
|
|
||||||
Export list for 192.168.0.30:
|
|
||||||
/mnt/Pool1/pv 192.168.0.0
|
|
||||||
```
|
|
||||||
|
|
||||||
on the nfs server, I am using the following /etc/exports
|
|
||||||
```bash
|
|
||||||
$more /etc/exports
|
|
||||||
/mnt/Pool1/pv -alldirs -mapall=root -network 192.168.0.0 -mask 255.255.255.0
|
|
||||||
```
|
|
||||||
on the nfs server, I created the Mailu directory (in fact I copied a working Mailu set-up)
|
|
||||||
```bash
|
|
||||||
$mkdir /mnt/Pool1/pv/mailu
|
|
||||||
```
|
|
||||||
|
|
||||||
On your manager node, mount the nfs share to check that the share is available:
|
|
||||||
```bash
|
|
||||||
core@coreos-01 ~ $ sudo mount -t nfs 192.168.0.30:/mnt/Pool1/pv/mailu /mnt/local/
|
|
||||||
```
|
|
||||||
If this is ok, you can umount it:
|
|
||||||
```bash
|
|
||||||
core@coreos-01 ~ $ sudo umount /mnt/local/
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
## Networking mode
|
|
||||||
On this example, we are using:
|
|
||||||
- the mesh routing mode (default mode). With this mode, each service is given a virtual IP address and docker manages the routing between this virtual IP and the container(s) providing this service.
|
|
||||||
- the default ingress mode.
|
|
||||||
|
|
||||||
### Allow authentification with the mesh routing
|
|
||||||
In order to allow every (front & webmail) container to access the other services, we will use the variable SUBNET.
|
|
||||||
|
|
||||||
Let's create the mailu_default network:
|
|
||||||
```bash
|
|
||||||
core@coreos-01 ~ $ docker network create -d overlay --attachable mailu_default
|
|
||||||
core@coreos-01 ~ $ docker network inspect mailu_default | grep Subnet
|
|
||||||
"Subnet": "10.0.1.0/24",
|
|
||||||
```
|
|
||||||
In the docker-compose.yml file, we will then use SUBNET = 10.0.1.0/24
|
|
||||||
In fact, imap & smtp logs doesn't show the IPs from the front(s) container(s), but the IP of "mailu_default-endpoint". So it is sufficient to set SUBNET to this specific ip (which can be found by inspecting mailu_default network). The issue is that this endpoint is created while the stack is created, I did'nt figure a way to determine this IP before the stack creation...

### Limitation with the ingress mode

With the default ingress mode, the front container(s) will see all origin IPs as 10.255.0.x (the ingress endpoint, which can be found by inspecting the ingress network).

This issue is known and discussed here:

https://github.com/moby/moby/issues/25526

A workaround (using network host mode and global deployment) is discussed here:

https://github.com/moby/moby/issues/25526#issuecomment-336363408

### Don't create an open relay!

As a side effect of this ingress mode "feature", make sure that the ingress subnet is not in your RELAYNETS, otherwise you would create an SMTP open relay.
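For illustration, a hedged mailu.env excerpt; the values are examples (the overlay subnet matches the one created above, and 10.255.0.0/16 is Docker's default ingress range — check yours with `docker network inspect ingress`):

```
# mailu.env (excerpt) - illustrative values
SUBNET=10.0.1.0/24
# RELAYNETS lists networks allowed to relay without authentication.
# It must NOT contain the ingress range (10.255.0.0/16 by default),
# since every external SMTP client appears to connect from it:
RELAYNETS=
```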

## Scalability

- smtp and imap are scalable.
- front and webmail are scalable (provided SUBNET is used), although the Let's Encrypt magic might not like it (a race condition? or the risk of being banned by the Let's Encrypt server if too many front containers attempt to renew the certs at the same time).
- redis, antispam, antivirus, fetchmail, admin and webdav have not been tested (hence replicas=1 in the following docker-compose.yml file).

## Variable substitution and docker-compose.yml

The docker stack deploy command doesn't support variable substitution in the .yml file itself. As a consequence, we need to use the following work-around:

```bash
echo "$(docker-compose -f /mnt/docker/apps/mailu/docker-compose.yml config 2>/dev/null)" | docker stack deploy -c- mailu
```

We also need to:

- change the way we define the volumes (an NFS share in our case)
- add a deploy section for every service
- change the way the ports are defined for the front service

## Docker compose

An example docker-compose-stack.yml file:

```yaml
version: '3.2'

services:

  front:
    image: mailu/nginx:$VERSION
    restart: always
    env_file: .env
    ports:
      - target: 80
        published: 80
      - target: 443
        published: 443
      - target: 110
        published: 110
      - target: 143
        published: 143
      - target: 993
        published: 993
      - target: 995
        published: 995
      - target: 25
        published: 25
      - target: 465
        published: 465
      - target: 587
        published: 587
    volumes:
      # - "$ROOT/certs:/certs"
      - type: volume
        source: mailu_certs
        target: /certs
    deploy:
      replicas: 2

  redis:
    image: redis:alpine
    restart: always
    volumes:
      # - "$ROOT/redis:/data"
      - type: volume
        source: mailu_redis
        target: /data
    deploy:
      replicas: 1

  imap:
    image: mailu/dovecot:$VERSION
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/mail:/mail"
      - type: volume
        source: mailu_mail
        target: /mail
      # - "$ROOT/overrides:/overrides"
      - type: volume
        source: mailu_overrides
        target: /overrides
    depends_on:
      - front
    deploy:
      replicas: 2

  smtp:
    image: mailu/postfix:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    volumes:
      # - "$ROOT/overrides:/overrides"
      - type: volume
        source: mailu_overrides
        target: /overrides
    depends_on:
      - front
    deploy:
      replicas: 2

  antispam:
    image: mailu/rspamd:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    depends_on:
      - front
    volumes:
      # - "$ROOT/filter:/var/lib/rspamd"
      - type: volume
        source: mailu_filter
        target: /var/lib/rspamd
      # - "$ROOT/dkim:/dkim"
      - type: volume
        source: mailu_dkim
        target: /dkim
      # - "$ROOT/overrides/rspamd:/etc/rspamd/override.d"
      - type: volume
        source: mailu_overrides_rspamd
        target: /etc/rspamd/override.d
    deploy:
      replicas: 1

  antivirus:
    image: mailu/none:$VERSION
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/filter:/data"
      - type: volume
        source: mailu_filter
        target: /data
    deploy:
      replicas: 1

  webdav:
    image: mailu/none:$VERSION
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/dav:/data"
      - type: volume
        source: mailu_dav
        target: /data
    deploy:
      replicas: 1

  admin:
    image: mailu/admin:$VERSION
    restart: always
    env_file: .env
    environment:
      - SUBNET=10.0.1.0/24
    volumes:
      # - "$ROOT/data:/data"
      - type: volume
        source: mailu_data
        target: /data
      # - "$ROOT/dkim:/dkim"
      - type: volume
        source: mailu_dkim
        target: /dkim
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - redis
    deploy:
      replicas: 1

  webmail:
    image: mailu/roundcube:$VERSION
    restart: always
    env_file: .env
    volumes:
      # - "$ROOT/webmail:/data"
      - type: volume
        source: mailu_data
        target: /data
    depends_on:
      - imap
    deploy:
      replicas: 2

  fetchmail:
    image: mailu/fetchmail:$VERSION
    restart: always
    env_file: .env
    deploy:
      replicas: 1

networks:
  default:
    external:
      name: mailu_default

volumes:
  mailu_filter:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/filter"
  mailu_dkim:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/dkim"
  mailu_overrides_rspamd:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/overrides/rspamd"
  mailu_data:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/data"
  mailu_mail:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/mail"
  mailu_overrides:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/overrides"
  mailu_dav:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/dav"
  mailu_certs:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/certs"
  mailu_redis:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.0.30,soft,rw"
      device: ":/mnt/Pool1/pv/mailu/redis"
```

## Deploy Mailu on the docker swarm

Run the following command:

```bash
echo "$(docker-compose -f /mnt/docker/apps/mailu/docker-compose.yml config 2>/dev/null)" | docker stack deploy -c- mailu
```

See how the services are being deployed:

```bash
core@coreos-01 ~ $ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                       PORTS
ywnsetmtkb1l        mailu_antivirus     replicated          1/1                 mailu/none:master
pqokiaz0q128        mailu_fetchmail     replicated          1/1                 mailu/fetchmail:master
```

Check a specific service:

```bash
core@coreos-01 ~ $ docker service ps mailu_fetchmail
ID                  NAME                IMAGE                    NODE                DESIRED STATE       CURRENT STATE         ERROR               PORTS
tbu8ppgsdffj        mailu_fetchmail.1   mailu/fetchmail:master   coreos-01           Running             Running 11 days ago
```

## Remove the stack

Run the following command:

```bash
core@coreos-01 ~ $ docker stack rm mailu
```

@ -1,139 +0,0 @@
{% set env='mailu.env' %}
# This file is auto-generated by the Mailu configuration wizard.
# Please read the documentation before attempting any change.
# Generated for {{ flavor }} flavor

version: '3.6'

services:

  # External dependencies
  redis:
    image: redis:alpine
    volumes:
      - "{{ root }}/redis:/data"

  # Core services
  front:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}nginx:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    logging:
      driver: {{ log_driver or 'json-file' }}
    ports:
    {% for port in (80, 443, 25, 465, 587, 110, 995, 143, 993) %}
      - target: {{ port }}
        published: {{ port }}
        mode: overlay
    {% endfor %}
    volumes:
      - "{{ root }}/certs:/certs"
      - "{{ root }}/overrides/nginx:/overrides:ro"
    deploy:
      replicas: 1

  admin:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}admin:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    {% if not admin_enabled %}
    ports:
      - 127.0.0.1:8080:80
    {% endif %}
    volumes:
      - "{{ root }}/data:/data"
      - "{{ root }}/dkim:/dkim"
    deploy:
      replicas: 1
    healthcheck:
      disable: true

  imap:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}dovecot:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/mail:/mail"
      - "{{ root }}/overrides/dovecot:/overrides:ro"
    deploy:
      replicas: 1
    healthcheck:
      disable: true

  smtp:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}postfix:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/mailqueue:/queue"
      - "{{ root }}/overrides/postfix:/overrides:ro"
    deploy:
      replicas: 1
    healthcheck:
      disable: true

  antispam:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}rspamd:${MAILU_VERSION:-{{ version }}}
    hostname: antispam
    env_file: {{ env }}
    volumes:
      - "{{ root }}/filter:/var/lib/rspamd"
      - "{{ root }}/overrides/rspamd:/etc/rspamd/override.d:ro"
    deploy:
      replicas: 1
    healthcheck:
      disable: true

  # Optional services
  {% if antivirus_enabled %}
  antivirus:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}clamav:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/filter:/data"
    deploy:
      replicas: 1
    healthcheck:
      disable: true
  {% endif %}

  {% if webdav_enabled %}
  webdav:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}radicale:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/dav:/data"
    deploy:
      replicas: 1
    healthcheck:
      disable: true
  {% endif %}

  {% if fetchmail_enabled %}
  fetchmail:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}fetchmail:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/data/fetchmail:/data"
    deploy:
      replicas: 1
    healthcheck:
      disable: true
  {% endif %}

  {% if webmail_type != 'none' %}
  webmail:
    image: ${DOCKER_ORG:-mailu}/${DOCKER_PREFIX:-}webmail:${MAILU_VERSION:-{{ version }}}
    env_file: {{ env }}
    volumes:
      - "{{ root }}/webmail:/data"
      - "{{ root }}/overrides/{{ webmail_type }}:/overrides:ro"
    deploy:
      replicas: 1
    healthcheck:
      disable: true
  {% endif %}

networks:
  default:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: {{ subnet }}

@ -1 +0,0 @@
../compose/mailu.env
@ -1,65 +0,0 @@
{% import "macros.html" as macros %}

{% call macros.panel("info", "Step 1 - Download your configuration files") %}
<p>Docker Stack expects a project file, named <code>docker-compose.yml</code>,
in a project directory. First create your project directory.</p>

<pre><code>mkdir -p {{ root }}/{redis,certs,data,data/fetchmail,dkim,mail,mailqueue,overrides/rspamd,overrides/postfix,overrides/dovecot,overrides/nginx,filter,dav,webmail}
</code></pre>

<p>Then download the project file. A side configuration file makes it easier
to read and check the configuration variables generated by the wizard.</p>

<pre><code>cd {{ root }}
wget {{ url_for('.file', uid=uid, _scheme='https', filepath='docker-compose.yml', _external=True) }}
wget {{ url_for('.file', uid=uid, _scheme='https', filepath='mailu.env', _external=True) }}
</code></pre>
{% endcall %}

{% call macros.panel("info", "Step 2 - Review the configuration") %}
<p>We did not insert any malicious code on purpose in the configurations we
distribute, but your download could have been intercepted, or our wizard
website could have been compromised, so make sure you check the configuration
files before going any further.</p>

<p>When you are done checking them, check them one last time.</p>
{% endcall %}

{% call macros.panel("info", "Step 3 - Deploy the docker stack") %}
<p>To deploy the docker stack, use the following commands. For more information about setting up docker swarm nodes, read the
<a href="https://docs.docker.com/get-started">docker documentation</a>.</p>

<pre><code>cd {{ root }}
docker swarm init
docker stack deploy -c docker-compose.yml mailu
</code></pre>

In the docker stack deploy command, mailu is the app name. Feel free to change it.<br/>
In order to display the running containers you can use<br/>
<pre><code>docker ps</code></pre>
or
<pre><code>docker stack ps --no-trunc mailu</code></pre>
The command for removing the docker stack is
<pre><code>docker stack rm mailu</code></pre>

Before you can use Mailu, you must create the primary administrator user account. This should be {{ postmaster }}@{{ domain }}. Use the following command, changing PASSWORD to your liking:

<pre><code>docker exec $(docker ps | grep admin | cut -d ' ' -f1) flask mailu admin {{ postmaster }} {{ domain }} PASSWORD
</code></pre>

<p>Log in to the admin interface to change the password to a safe one, at
{% if admin_enabled %}
one of the hostnames
<a href="https://{{ hostnames.split(',')[0] }}{{ admin_path }}">{{ hostnames.split(',')[0] }}{{ admin_path }}</a>.
{% else %}
<a href="http://127.0.0.1:8080/ui">http://127.0.0.1:8080/ui</a> (only directly from the host running docker).
If you run mailu on a remote server and wish to access the admin interface via an SSH tunnel, you can create a port-forward from your local machine to your server like
<pre><code>ssh -L 127.0.0.1:8080:127.0.0.1:8080 &lt;user&gt;@&lt;server&gt;
</code></pre>
and access the above URL from your local machine.
<br />
{% endif %}
Also, choose the "Update password" option in the left menu.
</p>
{% endcall %}
@ -1,13 +0,0 @@
{% call macros.panel("info", "Step 1 - Pick a flavor") %}
<p>Mailu comes in multiple "flavors". It was originally
designed to run on top of Docker Compose, but now offers multiple options
including Docker Stack, Rancher and Kubernetes.</p>
<p>Please note that "official" support, that is, support provided by the most active
developers, will mostly cover Compose and Stack, while other flavors are
maintained by specific contributors.</p>

<div class="radio">
  {{ macros.radio("flavor", "compose", "Compose", "simply using the Docker Compose manager", flavor) }}
  {{ macros.radio("flavor", "stack", "Stack", "using stack deployments in a Swarm cluster", flavor) }}
</div>
{% endcall %}
@ -1,62 +0,0 @@
{% call macros.panel("info", "Step 3 - Pick some features") %}
<p>Mailu comes with multiple base features, including a specific admin
interface, Web email clients, antispam, antivirus, etc.
In this section you can enable the services to your liking.</p>

<!-- Switched from radio buttons to dropdown menu in order to remove the checkbox -->
<p>A Webmail is a Web interface exposing an email client. Mailu webmails are
bound to the internal IMAP and SMTP server for users to access their mailbox through
the Web. By exposing a complex application such as a Webmail, you should be aware of
the security implications caused by such an increase of attack surface.</p>
<div class="form-group">
  <label>Enable Web email client (and path to the Web email client)</label>
  <br/>
  <select class="btn btn-primary dropdown-toggle" name="webmail_type" id="webmail">
    {% for webmailtype in ["none", "roundcube", "snappymail"] %}
    <option value="{{ webmailtype }}">{{ webmailtype }}</option>
    {% endfor %}
  </select>
  <p></p>
  <div class="input-group">
    <input class="form-control" type="text" name="webmail_path" id="webmail_path" style="display: none">
  </div>
</div>

<div class="form-check form-check-inline">
  <label class="form-check-label">
    <input class="form-check-input" type="checkbox" name="antivirus_enabled" value="clamav">
    Enable the antivirus service
  </label>
  <i>An antivirus server helps fight the large-scale virus-spreading campaigns that leverage
  e-mail for initial infection. Make sure that you have at least 1GB of memory for ClamAV to
  load its signature database.</i>
</div>

<div class="form-check form-check-inline">
  <label class="form-check-label">
    <input class="form-check-input" type="checkbox" name="webdav_enabled" value="radicale">
    Enable the webdav service
  </label>
  <i>A Webdav server exposes a Dav interface over HTTP so that clients can store
  contacts or calendars using the mail account.</i>
</div>

<div class="form-check form-check-inline">
  <label class="form-check-label">
    <input class="form-check-input" type="checkbox" name="fetchmail_enabled" value="true">
    Enable fetchmail
  </label>
  <i>Fetchmail allows users to retrieve mail from an external mail server via IMAP/POP3 and puts it in their inbox.</i>
</div>

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
<script type="text/javascript" src="{{ url_for('static', filename='render.js') }}"></script>
{% endcall %}
@ -1,24 +0,0 @@
{% call macros.panel("info", "Step 4 - Expose Mailu to the world") %}
<p>A mail server must be exposed to the world to receive emails, send emails,
and let users access their mailboxes. Mailu has some flexibility in the way
you expose it to the world.</p>

<div class="form-group">
  <label>Subnet of the docker network. This should not conflict with any networks to which your system is connected (internal and external!).</label>
  <input class="form-control" type="text" name="subnet" required pattern="^([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))$"
  value="192.168.203.0/24">
</div>

<p>Your server will be available under a main hostname but may expose multiple public
hostnames. Every e-mail domain that points to this server must have one of the
hostnames in its <code>MX</code> record. Hostnames must be comma-separated. If you're having
trouble accessing your admin interface, make sure it is the first entry here (and possibly the
same as your <code>DOMAIN</code> entry from earlier).</p>

<div class="form-group">
  <label>Public hostnames</label>
  <!-- Validates hostname or list of hostnames -->
  <input class="form-control" type="text" name="hostnames" placeholder="my.host.name,other.host.name" multiple required
  pattern="^(?:(?:\w+(?:-+\w+)*\.)+[a-z]+)*(?:,(?:(?:\w+(?:-+\w+)*\.)+[a-z]+)\s*)*$">
</div>
{% endcall %}
@ -0,0 +1,4 @@
Remove HOST_* variables, use *_ADDRESS everywhere instead. Please note that those should only contain a FQDN (no port number).
Derive a different key for admin/SECRET_KEY; this will invalidate existing sessions.
Ensure that rspamd starts after clamav.
Only display a single HOSTNAME on the client configuration page.
@ -0,0 +1 @@
Remove postfix's master.pid on startup if there is no other instance running.
@ -0,0 +1 @@
Implement Header authentication via an external proxy.
@ -0,0 +1 @@
Make quotas adjustable in 50MiB increments.
@ -0,0 +1 @@
Create a GUI for WILDCARD_SENDERS.
@ -0,0 +1 @@
Fix a bug preventing users without IMAP access from accessing the webmails.
@ -0,0 +1 @@
Add Snuffleupagus (a Suhosin replacement) to protect the webmails.
@ -0,0 +1 @@
Upgrade to Alpine 3.17.0.
@ -0,0 +1 @@
Autofocus the login form on /sso/login.
@ -0,0 +1,47 @@
--- plugins/managesieve/lib/Roundcube/rcube_sieve_engine.php
+++ plugins/managesieve/lib/Roundcube/rcube_sieve_engine.php
@@ -529,28 +529,13 @@
 // get request size limits (#1488648)
 $max_post = max([
     ini_get('max_input_vars'),
-    ini_get('suhosin.request.max_vars'),
-    ini_get('suhosin.post.max_vars'),
 ]);
-$max_depth = max([
-    ini_get('suhosin.request.max_array_depth'),
-    ini_get('suhosin.post.max_array_depth'),
-]);

 // check request size limit
 if ($max_post && count($_POST, COUNT_RECURSIVE) >= $max_post) {
     rcube::raise_error([
         'code' => 500, 'file' => __FILE__, 'line' => __LINE__,
         'message' => "Request size limit exceeded (one of max_input_vars/suhosin.request.max_vars/suhosin.post.max_vars)"
-        ], true, false
-    );
-    $this->rc->output->show_message('managesieve.filtersaveerror', 'error');
-}
-// check request depth limits
-else if ($max_depth && count($_POST['_header']) > $max_depth) {
-    rcube::raise_error([
-        'code' => 500, 'file' => __FILE__, 'line' => __LINE__,
-        'message' => "Request size limit exceeded (one of suhosin.request.max_array_depth/suhosin.post.max_array_depth)"
         ], true, false
     );
     $this->rc->output->show_message('managesieve.filtersaveerror', 'error');
--- program/lib/Roundcube/bootstrap.php
+++ program/lib/Roundcube/bootstrap.php
@@ -32,13 +32,11 @@
 // Some users are not using Installer, so we'll check some
 // critical PHP settings here. Only these, which doesn't provide
 // an error/warning in the logs later. See (#1486307).
-    'mbstring.func_overload' => 0,
 ];

 // check these additional ini settings if not called via CLI
 if (php_sapi_name() != 'cli') {
     $config += [
-        'suhosin.session.encrypt' => false,
         'file_uploads' => true,
         'session.auto_start' => false,
         'zlib.output_compression' => false,
@ -0,0 +1,134 @@
|
|||||||
|
# This is based on default configuration file for Snuffleupagus (https://snuffleupagus.rtfd.io),
|
||||||
|
# for php8.
|
||||||
|
# It contains "reasonable" defaults that won't break your websites,
|
||||||
|
# and a lot of commented directives that you can enable if you want to
|
||||||
|
# have a better protection.
|
||||||
|
|
||||||
|
# Harden the PRNG
|
||||||
|
sp.harden_random.enable();
|
||||||
|
|
||||||
|
# Disabled XXE
|
||||||
|
sp.xxe_protection.enable();
|
||||||
|
|
||||||
|
# Global configuration variables
|
||||||
|
sp.global.secret_key("{{ SNUFFLEUPAGUS_KEY }}");
|
||||||
|
|
||||||
|
# Globally activate strict mode
|
||||||
|
# https://www.php.net/manual/en/language.types.declarations.php#language.types.declarations.strict
|
||||||
|
sp.global_strict.enable();
|
||||||
|
|
||||||
|
# Prevent unserialize-related exploits
|
||||||
|
# sp.unserialize_hmac.enable();
|
||||||
|
|
||||||
|
# Only allow execution of read-only files. This is a low-hanging fruit that you should enable.
|
||||||
|
sp.readonly_exec.enable();
|
||||||
|
|
||||||
|
# PHP has a lot of wrappers, most of them aren't usually useful, you should
|
||||||
|
# only enable the ones you're using.
|
||||||
|
sp.wrappers_whitelist.list("file,php,phar,mailsosubstreams");
|
||||||
|
|
||||||
|
# Prevent sloppy comparisons.
|
||||||
|
sp.sloppy_comparison.enable();
|
||||||
|
|
||||||
|
# Use SameSite on session cookie
|
||||||
|
# https://snuffleupagus.readthedocs.io/features.html#protection-against-cross-site-request-forgery
|
||||||
|
sp.cookie.name("PHPSESSID").samesite("lax");
|
||||||
|
|
||||||
|
# Harden the `chmod` function (0777 (oct = 511, 0666 = 438)
|
||||||
|
sp.disable_function.function("chmod").param("permissions").value("438").drop();
|
||||||
|
sp.disable_function.function("chmod").param("permissions").value("511").drop();
|
||||||
|
|
||||||
|
# Prevent various `mail`-related vulnerabilities
|
||||||
|
sp.disable_function.function("mail").param("additional_parameters").value_r("\\-").drop();
|
||||||
|
|
||||||
|
# Since it's now burned, me might as well mitigate it publicly
|
||||||
|
sp.disable_function.function("putenv").param("assignment").value_r("LD_").drop()
|
||||||
|
|
||||||
|
# This one was burned in Nov 2019 - https://gist.github.com/LoadLow/90b60bd5535d6c3927bb24d5f9955b80
|
||||||
|
sp.disable_function.function("putenv").param("assignment").value_r("GCONV_").drop()
|
||||||
|
|
||||||
|
# Since people are stupid enough to use `extract` on things like $_GET or $_POST, we might as well mitigate this vector
|
||||||
|
sp.disable_function.function("extract").param("array").value_r("^_").drop()
|
||||||
|
sp.disable_function.function("extract").param("flags").value("0").drop()
|
||||||
|
|
||||||
|
# This is also burned:
|
||||||
|
# ini_set('open_basedir','..');chdir('..');…;chdir('..');ini_set('open_basedir','/');echo(file_get_contents('/etc/passwd'));
|
||||||
|
# Since we have no way of matching on two parameters at the same time, we're
|
||||||
|
# blocking calls to open_basedir altogether: nobody is using it via ini_set anyway.
|
||||||
|
# Moreover, there are non-public bypasses that are also using this vector ;)
|
||||||
|
sp.disable_function.function("ini_set").param("option").value_r("open_basedir").drop()
|
||||||
|
|
||||||
|
# Prevent various `include`-related vulnerabilities
|
||||||
|
sp.disable_function.function("require_once").value_r("\.(inc|phtml|php)$").allow();
|
||||||
|
sp.disable_function.function("include_once").value_r("\.(inc|phtml|php)$").allow();
|
||||||
|
sp.disable_function.function("require").value_r("\.(inc|phtml|php)$").allow();
|
||||||
|
sp.disable_function.function("include").value_r("\.(inc|phtml|php)$").allow();
|
||||||
|
sp.disable_function.function("require_once").drop()
|
||||||
|
sp.disable_function.function("include_once").drop()
|
||||||
|
sp.disable_function.function("require").drop()
|
||||||
|
sp.disable_function.function("include").drop()
|
||||||
|
|
||||||
|
# Prevent `system`-related injections
|
||||||
|
sp.disable_function.function("system").param("command").value_r("[$|;&`\\n\\(\\)\\\\]").drop();
|
||||||
|
sp.disable_function.function("shell_exec").param("command").value_r("[$|;&`\\n\\(\\)\\\\]").drop();
|
||||||
|
sp.disable_function.function("exec").param("command").value_r("[$|;&`\\n\\(\\)\\\\]").drop();
|
||||||
|
# This is **very** broad but doing better is non-straightforward
|
||||||
|
sp.disable_function.function("proc_open").param("command").value_r("^gpg ").allow();
|
||||||
|
sp.disable_function.function("proc_open").param("command").value_r("[$|;&`\\n\\(\\)\\\\]").drop();
|
||||||
|
|
||||||
|
# Prevent runtime modification of interesting things
|
||||||
|
sp.disable_function.function("ini_set").param("option").value("assert.active").drop();
|
||||||
|
sp.disable_function.function("ini_set").param("option").value("zend.assertions").drop();
|
||||||
|
sp.disable_function.function("ini_set").param("option").value("memory_limit").drop();
|
||||||
|
sp.disable_function.function("ini_set").param("option").value("include_path").drop();
|
||||||
|
sp.disable_function.function("ini_set").param("option").value("open_basedir").drop();
|
||||||
|
|
||||||
|
# Detect some backdoors via environment recon
|
||||||
|
sp.disable_function.function("ini_get").filename("/var/www/roundcube/vendor/guzzlehttp/guzzle/src/functions.php").param("option").value("allow_url_fopen").allow();
|
||||||
|
sp.disable_function.function("ini_get").param("option").value("allow_url_fopen").drop();
|
||||||
|
sp.disable_function.function("ini_get").param("option").value("open_basedir").drop();
|
||||||
|
sp.disable_function.function("ini_get").param("option").value_r("suhosin").drop();
|
||||||
|
sp.disable_function.function("function_exists").param("function").value("eval").drop();
sp.disable_function.function("function_exists").param("function").value("exec").drop();
sp.disable_function.function("function_exists").param("function").value("system").drop();
sp.disable_function.function("function_exists").param("function").value("shell_exec").drop();
sp.disable_function.function("function_exists").param("function").value("proc_open").drop();
sp.disable_function.function("function_exists").param("function").value("passthru").drop();
sp.disable_function.function("is_callable").param("value").value("eval").drop();
sp.disable_function.function("is_callable").param("value").value("exec").drop();
sp.disable_function.function("is_callable").param("value").value("system").drop();
sp.disable_function.function("is_callable").param("value").value("shell_exec").drop();
sp.disable_function.function("is_callable").filename_r("^/var/www/snappymail/snappymail/v/\d+\.\d+\.\d+/app/libraries/snappymail/pgp/gpg\.php$").param("value").value("proc_open").allow();
sp.disable_function.function("is_callable").param("value").value("proc_open").drop();
sp.disable_function.function("is_callable").param("value").value("passthru").drop();
# Crude error-based SQLi detection
#sp.disable_function.function("mysql_query").ret("FALSE").drop();
#sp.disable_function.function("mysqli_query").ret("FALSE").drop();
#sp.disable_function.function("PDO::query").ret("FALSE").drop();
# Ensure that certificates are properly verified
sp.disable_function.function("curl_setopt").param("value").value("1").allow();
sp.disable_function.function("curl_setopt").param("value").value("2").allow();
# `81` is CURLOPT_SSL_VERIFYHOST and `64` is CURLOPT_SSL_VERIFYPEER
sp.disable_function.function("curl_setopt").param("option").value("64").drop().alias("Please don't turn CURLOPT_SSL_VERIFYPEER off.");
sp.disable_function.function("curl_setopt").param("option").value("81").drop().alias("Please don't turn CURLOPT_SSL_VERIFYHOST off.");
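# Sketch of the intended effect (assuming top-to-bottom rule evaluation):
# curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2) matches the value("2")
# allow above and proceeds, while
# curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false) matches neither allow
# and hits the option("64") drop.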
# File upload
sp.disable_function.function("move_uploaded_file").param("to").value_r("\\.ph").drop();
sp.disable_function.function("move_uploaded_file").param("to").value_r("\\.ht").drop();
# Logging lockdown
sp.disable_function.function("ini_set").param("option").value_r("error_log").drop();
sp.disable_function.function("ini_set").param("option").value_r("display_errors").drop();

sp.auto_cookie_secure.enable();
# TODO: consider encrypting the cookies?
# TODO: ensure this is up to date
sp.cookie.name("roundcube_sessauth").samesite("strict");
sp.cookie.name("roundcube_sessid").samesite("strict");
sp.ini_protection.policy_silent_fail();
# roundcube uses unserialize() everywhere.
# This should do the job until https://github.com/jvoisin/snuffleupagus/issues/438 is implemented.
sp.disable_function.function("unserialize").param("data").value_r("[cCoO]:\d+:[\"{]").drop();
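# For instance (illustrative payloads): unserialize('O:8:"stdClass":0:{}')
# matches the pattern above and is dropped, while a plain scalar/array
# payload such as unserialize('a:1:{i:0;s:3:"foo";}') does not match and
# keeps working.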