Compare commits

105 Commits

Author SHA1 Message Date
6619246346 fix no password 2025-08-06 00:04:26 +02:00
4d127a57e2 complete user registration 2025-07-28 17:07:01 +02:00
6e58f328e4 Merge branch 'master' into feat/python 2025-07-26 15:45:22 +02:00
813e0e761f typo 2025-07-26 15:41:38 +02:00
2e62e9782e mattermost account-creation channel 2025-07-26 15:40:15 +02:00
9f0b8f2e1e paheko standby 2025-07-26 15:39:49 +02:00
fc4adc0fae mattermost first launch 2025-07-26 15:32:54 +02:00
74812fa79a mattermost admin user/pass 2025-07-26 14:45:27 +02:00
490c527d9a smallies 2025-07-26 14:41:31 +02:00
3220d862a6 fix mattermost 2025-07-26 13:52:15 +02:00
51cd89c16d smallies 2025-07-25 15:55:30 +02:00
1936326535 fix default mattermost 2025-07-25 15:52:47 +02:00
a630e47bfe fix mattermost pgsql 2025-07-25 15:10:43 +02:00
b4eee312df python 2025-07-25 14:49:46 +02:00
27ca4dfce3 init python 2025-07-24 21:47:54 +02:00
33fc237cb8 fix traefik 2025-07-17 18:26:36 +02:00
ed5ef23ed2 fix traefik 2025-07-17 17:23:13 +02:00
6f33808736 fix vm 2025-07-17 16:13:30 +02:00
477a9155fe upgrade traefik to 3.4.4 2025-07-16 06:43:16 +02:00
bce3b9eff5 New spip service! 2025-07-05 00:36:43 +02:00
d506f000a3 grafana: save current dashboards
Add custom dashboards, remove unused ones.
2025-06-25 22:57:36 +02:00
8906974a83 upgrade MM to 10.9.1 2025-06-23 17:23:16 +02:00
c12cafc277 prevent echoes in interoPahko (to avoid the mails) 2025-06-20 09:33:55 +02:00
f268f5f5f4 fix tty on the createUser docker execs now that we run from cron 2025-06-20 09:22:55 +02:00
d8bc48ec3a drop the echo "Rien à créer" 2025-06-20 07:01:42 +02:00
3940c3801d finish the Setup command 2025-06-19 23:43:58 +02:00
00f9e3ee5f add laposte.net 2025-06-19 21:18:18 +02:00
1bacfd307c remove snappymail from NC, no longer supported at the moment 2025-06-18 08:10:08 +02:00
8f6913565c rollback, mariadb 11.4 is required for the current NC versions 2025-06-15 09:57:41 +02:00
62b34e4ac0 mariadb = mariadb:latest 2025-06-15 09:55:01 +02:00
70c32de959 try a mariadb latest image on the wp and cloud docker-compose!
re-add MARIADB_AUTO_UPGRADE=1 in docker-compose
2025-06-15 09:52:49 +02:00
3eedd4293b trying mariadb:latest for gitea, don't hit me :-p 2025-06-15 09:09:24 +02:00
a2f737eb46 auto-upgrade the mariadb database 2025-06-15 09:03:54 +02:00
82a3440d5a upgrade mariadb to 11.8.2 2025-06-15 08:58:51 +02:00
a3e86ac6ac dmarc sympa 2025-06-11 09:38:19 +02:00
556471d321 doc certbot 2025-06-10 09:45:52 +02:00
9d666afab5 certbot dns challenge for alwaysdata 2025-06-10 09:43:17 +02:00
5eb4ccb58e mastodon 2025-06-06 14:51:27 +02:00
84849b71b1 put the traefik logs in a volume
show 4xx errors (for a fail2ban on the host)
2025-06-06 09:59:41 +02:00
316206140a remove --no-pdf-header-footer from headless chromium for pdf printing 2025-05-27 10:46:34 +02:00
7cc7df6ac1 upgrade MM 2025-05-21 19:47:14 +02:00
0d1c13d125 upgrade MM to 10.8 2025-05-20 07:07:34 +02:00
cb9a449882 security upgrade on paheko 2025-05-16 16:16:28 +02:00
678388afaa update paheko to 1.3.14 2025-05-13 14:13:25 +02:00
016b47774b prometheus: portainer ids for grafana
Add each machine's portainer ids to build a url.
2025-05-11 22:00:07 +02:00
6db4d1a5a8 add logs for the 404 403 401 errors returned by the reverse proxy 2025-05-11 09:32:07 +02:00
f54de7a26c update css 2025-05-10 16:52:20 +02:00
75678ca093 remove the hard-coded url 2025-05-10 10:00:42 +02:00
554d7a5ddc upgrade mailServer to 15.0.2 2025-05-10 09:39:17 +02:00
62e75a42f2 mastodon passwords WIP 2025-05-09 16:52:04 +02:00
4a6b575ce0 update traefik 3.4.0 2025-05-08 18:50:45 +02:00
8d83a2716b fix the mail 2025-05-07 08:12:57 +02:00
4807624dbc fix url for kazkouil 2025-05-02 13:05:42 +02:00
b5aa7e9945 put the prod1 version back on git 2025-05-02 11:55:40 +02:00
8d0caad3c7 simplify the prometheus conf 2025-05-02 11:47:09 +02:00
87b007d4b9 add label for cadvisor 2025-05-02 11:45:39 +02:00
7852e82e74 env cadvisor 2025-04-30 15:23:17 +02:00
9b92276fc1 settings cadvisor 2025-04-30 15:11:24 +02:00
e39ce5518c fix monitoring (cadvisor goes through traefik) 2025-04-29 23:36:14 +02:00
ea6e48886d update the acme.json cleanup 2025-04-25 11:07:53 +02:00
4187f4b772 add cadvisor/prometheus/grafana + dashboards 2025-04-24 23:10:06 +02:00
b00916ceba update: clean acme using the right srv IP 2025-04-24 14:35:44 +02:00
f95b959bf2 update cleanup 2025-04-24 00:27:47 +02:00
609b5c1d62 update nettoyer_acme_json 2025-04-24 00:16:51 +02:00
a6a20e0dea jq is a killer! 2025-04-24 00:03:30 +02:00
821335e1ca init acme.json cleanup of the LE certs for traefik 2025-04-23 22:33:03 +02:00
e31c75d8b1 upgrade MM to 10.7 2025-04-22 16:28:57 +02:00
c041bac532 upgrade traefik 2025-04-21 09:05:18 +02:00
8eb33813d6 file date 2025-04-20 17:57:31 +02:00
faf2e2bc8e add dyn DNS 2025-04-20 10:51:20 +02:00
adc0528c81 peertube 2025-04-20 09:34:17 +02:00
1259857474 add peertube 2025-04-19 17:10:33 +02:00
db684d4ebd sympa ssl 2025-04-19 16:59:09 +02:00
df657bb035 challenge acme traefik 2025-04-19 16:56:03 +02:00
5d8634c8df sympa traefik 2025-04-19 16:40:16 +02:00
c55e984918 Merge branch 'master' of ssh://git.kaz.bzh:2202/KAZ/KazV2 2025-04-19 14:23:14 +02:00
4b95553be0 certificates and webmail 2025-04-19 14:23:06 +02:00
1f8520db90 webmail 2025-04-19 13:49:22 +02:00
9de98c4021 fix alias bug 2025-04-18 15:16:55 +02:00
85b8048aa9 certificates for mail and lists 2025-04-18 13:36:44 +02:00
0bf808f0cf lowercase emails 2025-04-16 19:32:01 +02:00
1609e7725f add AD account 2025-04-11 09:27:46 +02:00
6bd95d1056 switch dns from gandi to alwaysdata! 2025-04-11 09:25:05 +02:00
07f8ef8151 update dns_alwaysdata.sh 2025-04-06 00:38:16 +02:00
aad57eafae update AD dns 2025-04-05 21:19:36 +02:00
4370436c42 change for officeurl 2025-03-25 00:33:01 +01:00
79c52c2067 add mastodon 2025-03-24 19:33:04 +01:00
d341122676 init api for alwaysdata 2025-03-19 19:05:37 +01:00
93a929d291 upgrade mm to 10.6 2025-03-19 10:44:09 +01:00
5d6e46bb37 volumes mastodon 2025-03-17 08:33:18 +01:00
545ed42968 mastodon images volume 2025-03-17 08:31:21 +01:00
53ba95b9d3 env mastodon 2025-03-15 10:26:57 +01:00
61f4629d1f add mastodon backup 2025-03-14 22:47:36 +01:00
b7bb45869a mastodon traefik streaming 2025-03-14 17:44:29 +01:00
888c614bdd doc mastodon 2025-03-14 17:37:17 +01:00
16683616c1 mastodon continued 2025-03-14 17:04:23 +01:00
c613184594 bootstrap mastodon 2025-03-14 16:58:02 +01:00
aaf3d9343e paheko fix gd/webp 2025-03-12 09:41:17 +01:00
e8fdead666 workaround for paheko: install gd with dompdf 2025-03-12 08:46:50 +01:00
b28c04928b add webmail 2025-03-10 22:23:54 +01:00
286b2fa144 update Mailserver 2025-03-10 21:52:15 +01:00
6a7fd829e5 update Dockerfile to install gd 2025-03-10 21:48:14 +01:00
5f20548e21 upgrade 2025-03-10 21:05:01 +01:00
b0dd373a00 date 2025-03-10 15:54:01 +01:00
6eec84f2ab rework mail sending 2025-03-10 15:53:10 +01:00
79 changed files with 32789 additions and 257 deletions

View File

@@ -85,6 +85,7 @@ done
-e "s|__VIGILO_HOST__|${vigiloHost}|g"\
-e "s|__WEBMAIL_HOST__|${webmailHost}|g"\
-e "s|__CASTOPOD_HOST__|${castopodHost}|g"\
-e "s|__SPIP_HOST__|${spipHost}|g"\
-e "s|__IMAPSYNC_HOST__|${imapsyncHost}|g"\
-e "s|__YAKFORMS_HOST__|${yakformsHost}|g"\
-e "s|__WORDPRESS_HOST__|${wordpressHost}|g"\

View File

@@ -0,0 +1,24 @@
#!/bin/bash
# certbot certonly --manual --preferred-challenges=dns --manual-auth-hook certbot-dns-alwaysdata.sh --manual-cleanup-hook certbot-dns-alwaysdata.sh -d "*.kaz.bzh" -d "kaz.bzh"
ALWAYSDATA_TOKEN="TOKEN"
ALWAYSDATA_ACCOUNT="ACCOUNT"
ALWAYSDATA_API="https://api.alwaysdata.com/v1/"
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${CERTBOT_DOMAIN} | jq '.[0].id')
add_record(){
RECORD_ID=$(curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"TXT\", \"name\":\"_acme-challenge\", \"value\":\"${CERTBOT_VALIDATION}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/")
}
del_record(){
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=_acme-challenge&type=TXT&domain=${DOMAIN_ID}" | jq ".[0].id")
curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
}
if [ -z ${CERTBOT_AUTH_OUTPUT} ]; then
add_record
else
del_record
fi
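Certbot invokes the same hook script for both the auth and cleanup phases, and it exports `CERTBOT_AUTH_OUTPUT` only during cleanup, which is what the `if`/`else` above keys on. A minimal dry-run sketch of that dispatch, with the AlwaysData curl calls stubbed out so the record functions just print what they would do:

```shell
#!/bin/bash
# Sketch of the manual-hook dispatch: certbot exports CERTBOT_AUTH_OUTPUT
# only for the cleanup phase, so an empty value means "auth phase".
# The real script replaces these stubs with curl calls to the AlwaysData API.
add_record() { echo "POST TXT _acme-challenge ${CERTBOT_VALIDATION}"; }
del_record() { echo "DELETE TXT _acme-challenge"; }

hook_dispatch() {
    if [ -z "${CERTBOT_AUTH_OUTPUT}" ]; then
        add_record    # auth phase: publish the validation token
    else
        del_record    # cleanup phase: remove the record
    fi
}
```

Note the quoting: the original `[ -z ${CERTBOT_AUTH_OUTPUT} ]` breaks if the variable ever contains whitespace; `[ -z "${CERTBOT_AUTH_OUTPUT}" ]` is the safe form.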

View File

@@ -8,6 +8,9 @@
# Did: 13 February 2025, changed the postgres and mysql saves
# Did: added the mobilizon and mattermost postgres backups
# 20/04/2025
# Did: added the peertube backups to the general services
# If postfix is absent, first run:
# docker network create postfix_mailNet
@@ -16,8 +19,7 @@
# back up a compose's database
# update the proxy configuration parameters
#KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
KAZ_ROOT=/kaz
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
@@ -221,6 +223,14 @@ saveComposes () {
echo "save mobilizon"
saveDB ${mobilizonDBName} "${mobilizon_POSTGRES_USER}" "${mobilizon_POSTGRES_PASSWORD}" "${mobilizon_POSTGRES_DB}" mobilizon postgres
;;
peertube)
echo "save peertube"
saveDB ${peertubeDBName} "${peertube_POSTGRES_USER}" "${peertube_POSTGRES_PASSWORD}" "${PEERTUBE_DB_HOSTNAME}" peertube postgres
;;
mastodon)
echo "save mastodon"
saveDB ${mastodonDBName} "${mastodon_POSTGRES_USER}" "${mastodon_POSTGRES_PASSWORD}" "${mastodon_POSTGRES_DB}" mastodon postgres
;;
roundcube)
echo "save roundcube"
saveDB ${roundcubeDBName} "${roundcube_MYSQL_USER}" "${roundcube_MYSQL_PASSWORD}" "${roundcube_MYSQL_DATABASE}" roundcube mysql
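The `saveComposes` case above dispatches each compose to `saveDB` with per-service credentials and engine (postgres vs mysql). A minimal stub of that dispatch pattern, with `saveDB` reduced to printing its target and hypothetical credential values standing in for the real env variables:

```shell
#!/bin/bash
# Stub of the saveComposes dispatch: one case branch per compose,
# postgres-backed services vs mysql-backed ones. saveDB only prints
# what the real function would back up.
saveDB() { echo "saveDB host=$1 user=$2 db=$4 engine=$6"; }

saveCompose() {
    case "$1" in
        mastodon)
            saveDB mastodonDB mastodon_user secret mastodon_db mastodon postgres ;;
        roundcube)
            saveDB roundcubeDB roundcube_user secret roundcube_db roundcube mysql ;;
        *)
            echo "no database to save for $1" ;;
    esac
}
```

The real script reads the user/password/db values from the compose's `.env`-derived variables; only the branch structure is shown here.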

bin/createUser.py (new executable file, +5 lines)

@@ -0,0 +1,5 @@
#!/usr/bin/python3
from lib.user import create_users_from_file
create_users_from_file()

View File

@@ -41,8 +41,6 @@ cd "${KAZ_ROOT}"
# DOCK_DIR="${KAZ_COMP_DIR}" # ???
SETUP_MAIL="docker exec -ti mailServ setup"
# determine the calling script, the log file and the source file, all derived from the same root
PRG=$(basename $0)
RACINE=${PRG%.sh}
@@ -210,15 +208,6 @@ done
echo "numero,nom,quota_disque,action_auto" > "${TEMP_PAHEKO}"
echo "curl \"https://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.kaz.bzh/api/user/import\" -T \"${TEMP_PAHEKO}\"" >> "${CMD_PAHEKO}"
#echo "récupération des login postfix... "
## store the emails and KAZ aliases already created
#(
# ${SETUP_MAIL} email list
# ${SETUP_MAIL} alias list
#) | cut -d ' ' -f 2 | grep @ | sort > "${TFILE_EMAIL}"
# did: strip the trailing ^M so the greps don't break
#dos2unix "${TFILE_EMAIL}"
echo "on récupère tous les emails (secours/alias/kaz) sur le ldap"
FILE_LDIF=/home/sauve/ldap.ldif
/kaz/bin/ldap/ldap_sauve.sh
@@ -226,13 +215,13 @@ gunzip ${FILE_LDIF}.gz -f
grep -aEiorh '([[:alnum:]]+([._-][[:alnum:]]+)*@[[:alnum:]]+([._-][[:alnum:]]+)*\.[[:alpha:]]{2,6})' ${FILE_LDIF} | sort -u > ${TFILE_EMAIL}
echo "récupération des login mattermost... "
docker exec -ti mattermostServ bin/mmctl user list --all | grep ":.*(" | cut -d ':' -f 2 | cut -d ' ' -f 2 | sort > "${TFILE_MM}"
docker exec -i mattermostServ bin/mmctl user list --all | grep ":.*(" | cut -d ':' -f 2 | cut -d ' ' -f 2 | sort > "${TFILE_MM}"
dos2unix "${TFILE_MM}"
echo "done"
# log in to the agora so that every mmctl command can then be issued
echo "docker exec -ti mattermostServ bin/mmctl auth login ${httpProto}://${URL_AGORA} --name local-server --username ${mattermost_user} --password ${mattermost_pass}" | tee -a "${CMD_INIT}"
echo "docker exec -i mattermostServ bin/mmctl auth login ${httpProto}://${URL_AGORA} --name local-server --username ${mattermost_user} --password ${mattermost_pass}" | tee -a "${CMD_INIT}"
# email validation
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"
@@ -379,8 +368,6 @@ while read ligne; do
else
SEND_MSG_CREATE=true
echo "${EMAIL_SOUHAITE} n'existe pas" | tee -a "${LOG}"
echo "${SETUP_MAIL} email add ${EMAIL_SOUHAITE} ${PASSWORD}" | tee -a "${CMD_LOGIN}"
echo "${SETUP_MAIL} quota set ${EMAIL_SOUHAITE} ${QUOTA}G" | tee -a "${CMD_LOGIN}"
# LDAP, to be tested
user=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $1}')
domain=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $2}')
@@ -597,11 +584,11 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
echo "${IDENT_KAZ} existe déjà sur mattermost" | tee -a "${LOG}"
else
# create the mattermost account
echo "docker exec -ti mattermostServ bin/mmctl user create --email ${EMAIL_SOUHAITE} --username ${IDENT_KAZ} --password ${PASSWORD}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl user create --email ${EMAIL_SOUHAITE} --username ${IDENT_KAZ} --password ${PASSWORD}" | tee -a "${CMD_LOGIN}"
# and finally, always add the user to the KAZ team and the 2 public channels
echo "docker exec -ti mattermostServ bin/mmctl team users add kaz ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:une-question--un-soucis ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:cafe-du-commerce--ouvert-2424h ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl team users add kaz ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl channel users add kaz:une-question--un-soucis ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl channel users add kaz:cafe-du-commerce--ouvert-2424h ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
NB_SERVICES_BASE=$((NB_SERVICES_BASE+1))
fi
@@ -609,10 +596,10 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# does the team already exist?
nb=$(docker exec mattermostServ bin/mmctl team list | grep -w "${EQUIPE_AGORA}" | wc -l)
if [ "${nb}" == "0" ];then # no: create it, making the user the team admin
echo "docker exec -ti mattermostServ bin/mmctl team create --name ${EQUIPE_AGORA} --display_name ${EQUIPE_AGORA} --email ${EMAIL_SOUHAITE}" --private | tee -a "${CMD_INIT}"
echo "docker exec -i mattermostServ bin/mmctl team create --name ${EQUIPE_AGORA} --display_name ${EQUIPE_AGORA} --email ${EMAIL_SOUHAITE}" --private | tee -a "${CMD_INIT}"
fi
# then add the user to the team
echo "docker exec -ti mattermostServ bin/mmctl team users add ${EQUIPE_AGORA} ${EMAIL_SOUHAITE}" | tee -a "${CMD_INIT}"
echo "docker exec -i mattermostServ bin/mmctl team users add ${EQUIPE_AGORA} ${EMAIL_SOUHAITE}" | tee -a "${CMD_INIT}"
fi
if [ -n "${CREATE_ORGA_SERVICES}" ]; then
@@ -629,16 +616,16 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# TODO: also use liste on dev
# subscribe the user on sympa to the infos@${domain_sympa} list
# docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=https://listes.kaz.sns/sympasoap --trusted_application=SOAP_USER --trusted_application_password=SOAP_PASSWORD --proxy_vars="USER_EMAIL=contact1@kaz.sns" --service=which
# docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=https://listes.kaz.sns/sympasoap --trusted_application=SOAP_USER --trusted_application_password=SOAP_PASSWORD --proxy_vars="USER_EMAIL=contact1@kaz.sns" --service=which
if [[ "${mode}" = "dev" ]]; then
echo "# DEV, on teste l'inscription à sympa"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
else
echo "# PROD, on inscrit à sympa"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SECOURS}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SECOURS}\"" | tee -a "${CMD_SYMPA}"
fi
if [ "${service[ADMIN_ORGA]}" == "O" ]; then
@@ -760,7 +747,7 @@ ${MAIL_KAZ}
EOF" | tee -a "${CMD_MSG}"
echo " # on envoie la confirmation d'inscription sur l'agora " | tee -a "${CMD_MSG}"
echo "docker exec -ti mattermostServ bin/mmctl post create kaz:Creation-Comptes --message \"${MAIL_KAZ}\"" | tee -a "${CMD_MSG}"
echo "docker exec -i mattermostServ bin/mmctl post create kaz:Creation-Comptes --message \"${MAIL_KAZ}\"" | tee -a "${CMD_MSG}"
# end of registrations
done <<< "${ALL_LINES}"
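The email verification step above matches each requested address against the `regex` variable with bash's ERE operator. The same pattern can be exercised standalone (regex copied verbatim from the script; `check_email` is a wrapper added here for illustration):

```shell
#!/bin/bash
# Validate candidate addresses with the same ERE createUser.sh uses.
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"

check_email() {
    # =~ must see the variable unquoted so it is read as a regex, not a literal
    [[ "$1" =~ $regex ]] && echo "OK" || echo "KO"
}
```

Quoting the right-hand side of `=~` would make bash compare the string literally, so `$regex` stays bare.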

View File

@@ -1,6 +1,11 @@
#!/bin/bash
#/bin/bash
# list/add/delete a sub-domain
#koi: management of dns records on AlwaysData
#ki: fanch&gaël&fab
#kan: 06/04/2025
#doc: https://api.alwaysdata.com/v1/record/doc/
#doc: https://help.alwaysdata.com/fr/api/
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
@@ -15,6 +20,7 @@ export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
#TODO: fetch the list of kaz services instead of hard-coding it
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
@@ -31,6 +37,15 @@ usage(){
exit 1
}
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
for ARG in $@
do
case "${ARG}" in
@@ -60,78 +75,15 @@ if [ -z "${CMD}" ]; then
usage
fi
. "${KAZ_KEY_DIR}/env-gandi"
if [[ -z "${GANDI_KEY}" ]] ; then
echo
echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
usage
fi
waitNet () {
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
### wait when error code 503
if [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]; then
echo "DNS not available. Please wait..."
while [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]
do
sleep 5
done
exit
fi
}
list(){
if [[ "${domain}" = "kaz.local" ]]; then
grep --perl-regex "^${IP}\s.*${domain}" "${ETC_HOSTS}" 2> /dev/null | sed -e "s|^${IP}\s*\([0-9a-z.-]${domain}\)$|\1|g"
return
fi
waitNet
trap 'rm -f "${TMPFILE}"' EXIT
TMPFILE="$(mktemp)" || exit 1
if [[ -n "${SIMU}" ]] ; then
${SIMU} curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"
else
curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null | \
sed "s/,{/\n/g" | \
sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'| \
grep -v '^[_@]'| \
grep -e ":${domain}\.*$" -e ":prod[0-9]*$" > ${TMPFILE}
fi
if [ $# -lt 1 ]; then
cat ${TMPFILE}
else
for ARG in $@
do
cat ${TMPFILE} | grep "${ARG}.*:"
done
fi
TARGET=$@
LISTE=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" | jq '.[] | "\(.name):\(.value)"')
echo ${LISTE}
}
saveDns () {
for ARG in $@ ; do
if [[ "${ARG}" =~ .local$ ]] ; then
echo "${PRG}: old-fashioned style (remove .local at the end)"
usage;
fi
if [[ "${ARG}" =~ .bzh$ ]] ; then
echo "${PRG}: old-fashioned style (remove .bzh at the end)"
usage;
fi
if [[ "${ARG}" =~ .dev$ ]] ; then
echo "${PRG}: old-fashioned style (remove .dev at the end)"
usage;
fi
done
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
waitNet
${SIMU} curl -X POST "${GANDI_API}/snapshots" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
badName(){
@@ -154,28 +106,14 @@ add(){
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
if grep -q --perl-regex "^${IP}[ \t]" "${ETC_HOSTS}" 2> /dev/null ; then
${SIMU} sudo sed -i -e "0,/^${IP}[ \t]/s/^\(${IP}[ \t]\)/\1${ARG}.${domain} /g" "${ETC_HOSTS}"
else
${SIMU} sudo sed -i -e "$ a ${IP}\t${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null
fi
;;
*)
${SIMU} curl -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"CNAME", "rrset_name":"'${ARG}'", "rrset_values":["'${site}'"]}'
echo
;;
esac
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"CNAME\", \"name\":\"${ARG}\", \"value\":\"${site}.${domain}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
}
del(){
if [ $# -lt 1 ]; then
exit
fi
@@ -187,23 +125,11 @@ del(){
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if !grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
${SIMU} sudo sed -i -e "/^${IP}[ \t]*${ARG}.${domain}[ \t]*$/d" \
-e "s|^\(${IP}.*\)[ \t]${ARG}.${domain}|\1|g" "${ETC_HOSTS}"
;;
* )
${SIMU} curl -X DELETE "${GANDI_API}/records/${ARG}" -H "authorization: Apikey ${GANDI_KEY}"
echo
;;
esac
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${ARG}&type=CNAME&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${ARG}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
#echo "CMD: ${CMD} $*"
${CMD} $*

bin/dns_alwaysdata.sh (new executable file, +135 lines)

@@ -0,0 +1,135 @@
#!/bin/bash
#koi: management of dns records on AlwaysData
#ki: fanch&gaël&fab
#kan: 06/04/2025
#doc: https://api.alwaysdata.com/v1/record/doc/
#doc: https://help.alwaysdata.com/fr/api/
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "${KAZ_ROOT}"
export PRG="$0"
export IP="127.0.0.1"
export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
#TODO: fetch the list of kaz services instead of hard-coding it
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
export FORCE="NO"
export CMD=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " ${PRG} [-n] [-f] {add/del} sub-domain..."
echo " -h help"
echo " -n simulation"
echo " -f force protected domain"
exit 1
}
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-f' )
shift
export FORCE="YES"
;;
'-n' )
shift
export SIMU="echo"
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
usage
fi
list(){
TARGET=$@
LISTE=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" | jq '.[] | "\(.name):\(.value)"')
echo ${LISTE}
}
saveDns () {
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
badName(){
[[ -z "$1" ]] && return 0;
for item in "${forbidenName[@]}"; do
[[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
done
return 1
}
add(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a ADDED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"CNAME\", \"name\":\"${ARG}\", \"value\":\"${site}.${domain}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
del(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a REMOVED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${ARG}&type=CNAME&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${ARG}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
${CMD} $*
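The `badName` guard in the file above is self-contained and easy to exercise on its own. A sketch with a hypothetical three-entry forbidden list (the real `forbidenName` array is built from the `${...Host}` variables of the Kaz environment):

```shell
#!/bin/bash
# Standalone badName guard: returns 0 (i.e. "refuse this name") when the
# sub-domain is empty, or reserved while FORCE is not set.
forbidenName=(www calc listes)  # hypothetical list; the script derives it from its env
FORCE="NO"

badName() {
    [[ -z "$1" ]] && return 0
    for item in "${forbidenName[@]}"; do
        [[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
    done
    return 1
}
```

With `FORCE="YES"` (the `-f` flag) reserved names pass through, which is exactly how `add` and `del` let an operator override the protection.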

bin/dns_gandi.sh (new executable file, +209 lines)

@@ -0,0 +1,209 @@
#!/bin/bash
# list/add/delete a sub-domain
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "${KAZ_ROOT}"
export PRG="$0"
export IP="127.0.0.1"
export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
export FORCE="NO"
export CMD=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " ${PRG} [-n] [-f] {add/del} sub-domain..."
echo " -h help"
echo " -n simulation"
echo " -f force protected domain"
exit 1
}
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-f' )
shift
export FORCE="YES"
;;
'-n' )
shift
export SIMU="echo"
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
usage
fi
. "${KAZ_KEY_DIR}/env-gandi"
if [[ -z "${GANDI_KEY}" ]] ; then
echo
echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
usage
fi
waitNet () {
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
### wait when error code 503
if [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]; then
echo "DNS not available. Please wait..."
while [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]
do
sleep 5
done
exit
fi
}
list(){
if [[ "${domain}" = "kaz.local" ]]; then
grep --perl-regex "^${IP}\s.*${domain}" "${ETC_HOSTS}" 2> /dev/null | sed -e "s|^${IP}\s*\([0-9a-z.-]${domain}\)$|\1|g"
return
fi
waitNet
trap 'rm -f "${TMPFILE}"' EXIT
TMPFILE="$(mktemp)" || exit 1
if [[ -n "${SIMU}" ]] ; then
${SIMU} curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"
else
curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null | \
sed "s/,{/\n/g" | \
sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'| \
grep -v '^[_@]'| \
grep -e ":${domain}\.*$" -e ":prod[0-9]*$" > ${TMPFILE}
fi
if [ $# -lt 1 ]; then
cat ${TMPFILE}
else
for ARG in $@
do
cat ${TMPFILE} | grep "${ARG}.*:"
done
fi
}
saveDns () {
for ARG in $@ ; do
if [[ "${ARG}" =~ .local$ ]] ; then
echo "${PRG}: old-fashioned style (remove .local at the end)"
usage;
fi
if [[ "${ARG}" =~ .bzh$ ]] ; then
echo "${PRG}: old-fashioned style (remove .bzh at the end)"
usage;
fi
if [[ "${ARG}" =~ .dev$ ]] ; then
echo "${PRG}: old-fashioned style (remove .dev at the end)"
usage;
fi
done
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
waitNet
${SIMU} curl -X POST "${GANDI_API}/snapshots" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null
}
badName(){
[[ -z "$1" ]] && return 0;
for item in "${forbidenName[@]}"; do
[[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
done
return 1
}
add(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a ADDED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
if grep -q --perl-regex "^${IP}[ \t]" "${ETC_HOSTS}" 2> /dev/null ; then
${SIMU} sudo sed -i -e "0,/^${IP}[ \t]/s/^\(${IP}[ \t]\)/\1${ARG}.${domain} /g" "${ETC_HOSTS}"
else
${SIMU} sudo sed -i -e "$ a ${IP}\t${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null
fi
;;
*)
${SIMU} curl -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"CNAME", "rrset_name":"'${ARG}'", "rrset_values":["'${site}'"]}'
echo
;;
esac
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
del(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a REMOVED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if !grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
${SIMU} sudo sed -i -e "/^${IP}[ \t]*${ARG}.${domain}[ \t]*$/d" \
-e "s|^\(${IP}.*\)[ \t]${ARG}.${domain}|\1|g" "${ETC_HOSTS}"
;;
* )
${SIMU} curl -X DELETE "${GANDI_API}/records/${ARG}" -H "authorization: Apikey ${GANDI_KEY}"
echo
;;
esac
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
#echo "CMD: ${CMD} $*"
${CMD} $*

bin/dynDNS.sh (new executable file, +176 lines)

@@ -0,0 +1,176 @@
#!/bin/bash
# nohup /kaz/bin/dynDNS.sh &
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
# no more export in .env
export $(set | grep "domain=")
cd "${KAZ_ROOT}"
export PRG="$0"
export MYHOST="${site}"
MYIP_URL="https://kaz.bzh/myip.php"
DNS_IP=""
DELAI_WAIT=10 # DNS occupé
DELAI_GET=5 # min entre 2 requêtes
DELAI_CHANGE=3600 # propagation 1h
DELAI_NO_CHANGE=300 # pas de changement 5 min
BOLD='\e[1m'
RED='\e[0;31m'
GREEN='\e[0;32m'
YELLOW='\e[0;33m'
BLUE='\e[0;34m'
MAGENTA='\e[0;35m'
CYAN='\e[0;36m'
NC='\e[0m' # No Color
NL='
'
export VERBOSE=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " -h help"
echo " -v verbose"
echo " -n simulation"
exit 1
}
#. "${KAZ_KEY_DIR}/env-gandi"
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
if [[ -z "${DOMAIN_ID}" ]] ; then
echo "no DOMAIN_ID give by alwaysdata"
usage
fi
# if [[ -z "${GANDI_KEY}" ]] ; then
# echo
# echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
# usage
# exit
# fi
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-v' )
shift
export VERBOSE=":"
;;
'-n' )
shift
export SIMU="echo"
;;
* )
usage
;;
esac
done
log () {
echo -e "${BLUE}$(date +%d-%m-%Y-%H-%M-%S)${NC} : $*"
}
simu () {
echo -e "${YELLOW}$(date +%d-%m-%Y-%H-%M-%S)${NC} : $*"
}
cmdWait () {
#ex gandi
#curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - -o /dev/null "${GANDI_API}" 2>/dev/null
curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" --connect-timeout 2 -D - -o /dev/null "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" 2>/dev/null
}
waitNet () {
### wait when error code 503
if [[ $(cmdWait | head -n1) != *200* ]]; then
log "DNS not available. Please wait..."
while [[ $(cmdWait | head -n1) != *200* ]]; do
[[ -z "${VERBOSE}" ]] || simu curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" --connect-timeout 2 -D - -o /dev/null "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}"
sleep "${DELAI_WAIT}"
done
exit
fi
}
getDNS () {
# curl -s -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"|
# sed "s/,{/\n/g"|
# sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'|
# grep -e "^${MYHOST}:"|
# sed "s/^${MYHOST}://g" |
# tr -d '\n\t\r '
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=A&name=${MYHOST}" | jq '.[] | "\(.value)"' | tr -d '"'
}
saveDns () {
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
setDNS () {
saveDns
# curl -s -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"A", "rrset_name":"'${MYHOST}'", "rrset_values":["'${IP}'"]}'
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"A\", \"name\":\"${MYHOST}\", \"value\":\"${IP}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
}
while :; do
sleep "${DELAI_GET}"
IP=$(curl -s "${MYIP_URL}" | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | tr -d '\n\t\r ')
if ! [[ ${IP} =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
log "BAB IP ${IP}" ; continue
fi
if [ -z "${DNS_IP}" ]; then
# Variable pas encore initialisée
waitNet
DNS_IP=$(getDNS)
if [ -z "${DNS_IP}" ]; then
# C'est la première fois que le site est en prod
log "set ${MYHOST} : ${IP}"
setDNS
DNS_IP=$(getDNS)
log "DNS set ${MYHOST}:${IP} (=${DNS_IP})"
sleep "${DELAI_CHANGE}"
continue
fi
fi
if [ "${DNS_IP}" != "${IP}" ]; then
log "${MYHOST} : ${DNS_IP} must change to ${IP}"
# Changement d'adresse
waitNet
#curl -s -X DELETE "${GANDI_API}/records/${MYHOST}" -H "authorization: Apikey ${GANDI_KEY}"
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${MYHOST}&type=A&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${MYHOST}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
setDNS
DNS_IP=$(getDNS)
log "DNS reset ${MYHOST}:${IP} (=${DNS_IP})"
sleep "${DELAI_CHANGE}"
else
log "OK ${MYHOST}:${DNS_IP} / ${IP}"
sleep ${DELAI_NO_CHANGE}
fi
done
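The main loop above boils down to a small decision table: a bad answer from the my-ip service → retry, no A record yet → create it, a mismatch → delete and re-create then wait for propagation, a match → short sleep and poll again. A hedged Python sketch of just that decision logic (the function and action names are illustrative, not part of the script):

```python
import re

IPV4 = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def next_action(current_ip, dns_ip):
    """Decision table of the dynDNS main loop (action names are illustrative)."""
    if not IPV4.match(current_ip or ""):
        return "retry"            # bad answer from the my-ip service
    if not dns_ip:
        return "create-record"    # first run: no A record yet
    if dns_ip != current_ip:
        return "replace-record"   # address changed: delete + re-create
    return "sleep"                # in sync: poll again later

print(next_action("82.64.1.2", None))  # create-record
```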

View File

@@ -23,7 +23,7 @@ PRG=$(basename $0)
# TEMPO_ACTION_STOP=2 # Lors de redémarrage avec tempo, on attend après le stop
# TEMPO_ACTION_START=60 # Lors de redémarrage avec tempo, avant de reload le proxy
# DEFAULTCONTAINERS="cloud agora wp wiki office paheko castopod"
# DEFAULTCONTAINERS="cloud agora wp wiki office paheko castopod spip"
# APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
@@ -42,16 +42,16 @@ CONTAINERS_TYPES=
declare -A DockerServNames # le nom des containers correspondant
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" [castopod]="${castopodServName}" )
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" [castopod]="${castopodServName}" [spip]="${spipServName}" )
declare -A FilterLsVolume # Pour trouver quel volume appartient à quel container
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" [castopod]="castopodMedia" )
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" [castopod]="castopodMedia" [spip]="spip")
declare -A composeDirs # Le nom du repertoire compose pour le commun
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" [castopod]="castopod" )
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" [castopod]="castopod" [spip]="spip")
declare -A serviceNames # Le nom du du service dans le dockerfile d'orga
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora" [castopod]="castopod")
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora" [castopod]="castopod" [spip]="spip")
declare -A subScripts
subScripts=( [cloud]="manageCloud.sh" [agora]="manageAgora.sh" [wiki]="manageWiki.sh" [wp]="manageWp.sh" [castopod]="manageCastopod.sh" )
@@ -93,6 +93,7 @@ CONTAINERS_TYPES
-office Les collabora
-paheko Le paheko
-castopod Les castopod
-spip Les spip
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du container
@@ -551,6 +552,8 @@ for ARG in "$@"; do
CONTAINERS_TYPES="${CONTAINERS_TYPES} paheko" ;;
'-pod'|'--pod'|'-castopod'|'--castopod')
CONTAINERS_TYPES="${CONTAINERS_TYPES} castopod" ;;
'-spip')
CONTAINERS_TYPES="${CONTAINERS_TYPES} spip" ;;
'-t' )
COMMANDS="${COMMANDS} RESTART-COMPOSE" ;;
'-r' )

View File

@@ -1,5 +1,7 @@
#!/bin/bash
# gestion des utilisateurs de kaz ( mail, cloud général, mattermost )
# Ki : Did
# koi : gestion globale des users Kaz mais aussi les users d'autres domaines hébergés
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
@@ -8,7 +10,7 @@ setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
VERSION="5-12-2024"
VERSION="18-05-2025"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
IFS=' '
@@ -968,9 +970,9 @@ updateUser() {
MAILALIAS_CHANGE=0
for VALMAIL in ${CONTENU_ATTRIBUT}
do
read -p " - On garde ${VALMAIL} (o/n) ? [o] : " READVALMAIL
read -p " - On garde ${VALMAIL} (o/n) [o] ? : " READVALMAIL
case ${READVALMAIL} in
* | "" | o | O )
"" | o | O )
NEW_CONTENU_ATTRIBUT="${NEW_CONTENU_ATTRIBUT} ${VALMAIL}"
;;
n | N )
@@ -1007,7 +1009,7 @@ updateUser() {
done
;;
"" | n | N )
#CHANGED+=([mailAlias]="${NEW_CONTENU_ATTRIBUT}")
CHANGED+=([mailAlias]="${NEW_CONTENU_ATTRIBUT}")
;;
* )
printKazMsg "Erreur"

bin/getX509Certificates.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
#koi: récupération des certifs traefik vers x509 pour mail et listes
#ki: fanch
#kan: 18/04/2025
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
certificates="mail listes"
for i in ${certificates}; do
jq -r ".letsencrypt.Certificates[] | select(.domain.main==\"${i}.${domain}\") | .certificate" /var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json | base64 -d > /etc/ssl/certs/${i}.pem
jq -r ".letsencrypt.Certificates[] | select(.domain.main==\"${i}.${domain}\") | .key" /var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json | base64 -d > /etc/ssl/private/${i}.key
chmod 600 /etc/ssl/private/${i}.key
done
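The jq filters above select the acme.json entry whose `domain.main` matches and base64-decode its `certificate` and `key` fields. The same walk over the structure, sketched in Python over a fabricated minimal document (the data below is invented purely to show the shape traefik stores):

```python
import base64

def extract_cert(acme, fqdn):
    """Return the decoded certificate for fqdn, or None if absent."""
    for entry in acme["letsencrypt"]["Certificates"]:
        if entry["domain"]["main"] == fqdn:
            return base64.b64decode(entry["certificate"])
    return None

# minimal fabricated acme.json-like structure
acme = {"letsencrypt": {"Certificates": [
    {"domain": {"main": "mail.kaz.bzh"},
     "certificate": base64.b64encode(b"PEM-DATA").decode()}]}}
print(extract_cert(acme, "mail.kaz.bzh"))  # b'PEM-DATA'
```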

View File

@@ -123,6 +123,8 @@ export DebugLog="${KAZ_ROOT}/log/log-install-$(date +%y-%m-%d-%T)-"
if [[ " ${DOCKERS_LIST[*]} " =~ " traefik " ]]; then
# on initialise traefik :-(
${KAZ_COMP_DIR}/traefik/first.sh
# on démarre traefik (plus lancé dans container.sh)
docker-compose -f ${KAZ_COMP_DIR}/traefik/docker-compose.yml up -d
fi
if [[ " ${DOCKERS_LIST[*]} " =~ " etherpad " ]]; then

View File

@@ -1,6 +1,7 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
@@ -76,6 +77,10 @@ Int_paheko_Action() {
do
eval $VAL_GAR=$(jq .$VAL_GAR ${TFILE_INT_PAHEKO_IDFILE})
done
################################
# test du mail valide en $domain
echo ${email} | grep -i "${domain}" || { echo "le mail ${email} n'est pas en ${domain}"; exit ;}
################################
#comme tout va bien on continue
#on compte le nom de champs dans la zone nom pour gérer les noms et prénoms composés
# si il y a 3 champs, on associe les 2 premieres valeurs avec un - et on laisse le 3ème identique
@@ -145,6 +150,9 @@ Int_paheko_Action() {
nc_base="N"
admin_orga="O"
fi
#On met le mail et le mail de secours en minuscules
email=$(echo $email | tr '[:upper:]' '[:lower:]')
email_secours=$(echo $email_secours | tr '[:upper:]' '[:lower:]')
# Pour le reste on renomme les null en N ( non ) et les valeurs 1 en O ( Oui)
cloud=$(echo $cloud | sed -e 's/0/N/g' | sed -e 's/1/O/g')
paheko=$(echo $garradin | sed -e 's/0/N/g' | sed -e 's/1/O/g')
@@ -155,11 +163,11 @@ Int_paheko_Action() {
echo "$nom_ok;$prenom_ok;$email;$email_secours;$nom_orga;$admin_orga;$cloud;$paheko;$wordpress;$agora;$docuwiki;$nc_base;$groupe_nc_base;$equipe_agora;$quota_disque">>${FILE_CREATEUSER}
done
else
echo "Rien à créer"
[ "$OPTION" = "silence" ] || echo "Rien à créer"
exit 2
fi
}
#Int_paheko_Action "A créer" "silence"
Int_paheko_Action "A créer"
# Main
Int_paheko_Action "A créer" "silence"
exit 0

bin/lib/config.py Normal file

@@ -0,0 +1,15 @@
DOCKERS_ENV = "/kaz/config/dockers.env"
SECRETS = "/kaz/secret/env-{serv}"
def getDockersConfig(key):
with open(DOCKERS_ENV) as config:
for line in config:
if line.startswith(f"{key}="):
return line.split("=", 1)[1].split("#")[0].strip()
def getSecretConfig(serv, key):
with open(SECRETS.format(serv=serv)) as config:
for line in config:
if line.startswith(f"{key}="):
return line.split("=", 2)[1].split("#")[0].strip()
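Both helpers rely on the same `KEY=value  # optional comment` convention: take everything after the first `=`, then strip any inline comment. A standalone sketch of that parsing rule (file paths left out on purpose):

```python
def parse_env_value(line, key):
    """Extract the value of key from a 'KEY=value  # comment' line, or None."""
    if not line.startswith(f"{key}="):
        return None
    # split on the FIRST '=' only, so values containing '=' survive intact
    return line.split("=", 1)[1].split("#")[0].strip()

print(parse_env_value("domain=kaz.bzh  # domaine de prod", "domain"))  # kaz.bzh
```

Splitting on the first `=` matters for secrets that themselves contain `=` (base64 padding, for instance): `split("=", 2)[1]` would silently truncate them.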

bin/lib/ldap.py Normal file

@@ -0,0 +1,101 @@
import ldap
from passlib.hash import sha512_crypt
from email_validator import validate_email, EmailNotValidError
import subprocess
from .config import getDockersConfig, getSecretConfig
class Ldap:
def __init__(self):
self.ldap_connection = None
self.ldap_root = getDockersConfig("ldap_root")
self.ldap_admin_username = getSecretConfig("ldapServ", "LDAP_ADMIN_USERNAME")
self.ldap_admin_password = getSecretConfig("ldapServ", "LDAP_ADMIN_PASSWORD")
cmd="docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ"
self.ldap_host = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT).strip().decode()
def __enter__(self):
self.ldap_connection = ldap.initialize(f"ldap://{self.ldap_host}")
self.ldap_connection.simple_bind_s("cn={},{}".format(self.ldap_admin_username, self.ldap_root), self.ldap_admin_password)
return self
def __exit__(self, tp, e, traceback):
self.ldap_connection.unbind_s()
def get_email(self, email):
"""
Vérifier si un utilisateur avec cet email existe dans le LDAP soit comme mail principal soit comme alias
"""
# Créer une chaîne de filtre pour rechercher dans les champs "cn" et "mailAlias"
filter_str = "(|(cn={})(mailAlias={}))".format(email, email)
result = self.ldap_connection.search_s("ou=users,{}".format(self.ldap_root), ldap.SCOPE_SUBTREE, filter_str)
return result
def delete_user(self, email):
"""
Supprimer un utilisateur du LDAP par son adresse e-mail
"""
try:
# Recherche de l'utilisateur
result = self.ldap_connection.search_s("ou=users,{}".format(self.ldap_root), ldap.SCOPE_SUBTREE, "(cn={})".format(email))
if not result:
return False # Utilisateur non trouvé
# Récupération du DN de l'utilisateur
dn = result[0][0]
# Suppression de l'utilisateur
self.ldap_connection.delete_s(dn)
return True # Utilisateur supprimé avec succès
except ldap.NO_SUCH_OBJECT:
return False # Utilisateur non trouvé
except ldap.LDAPError as e:
return False # Erreur lors de la suppression
def create_user(self, email, prenom, nom, password, email_secours, quota):
"""
Créer une nouvelle entrée dans le LDAP pour un nouvel utilisateur. QUESTION: A QUOI SERVENT PRENOM/NOM/IDENT_KAZ DANS LE LDAP ? POURQUOI 3 QUOTA ?
"""
password_chiffre = sha512_crypt.hash(password)
if not validate_email(email) or not validate_email(email_secours):
return False
if self.get_email(email):
return False
# Construire le DN
dn = f"cn={email},ou=users,{self.ldap_root}"
mod_attrs = [
('objectClass', [b'inetOrgPerson', b'PostfixBookMailAccount', b'nextcloudAccount', b'kaznaute']),
('sn', f'{prenom} {nom}'.encode('utf-8')),
('mail', email.encode('utf-8')),
('mailEnabled', b'TRUE'),
('mailGidNumber', b'5000'),
('mailHomeDirectory', f"/var/mail/{email.split('@')[1]}/{email.split('@')[0]}/".encode('utf-8')),
('mailQuota', f'{quota}G'.encode('utf-8')),
('mailStorageDirectory', f"maildir:/var/mail/{email.split('@')[1]}/{email.split('@')[0]}/".encode('utf-8')),
('mailUidNumber', b'5000'),
('mailDeSecours', email_secours.encode('utf-8')),
('identifiantKaz', f'{prenom.lower()}.{nom.lower()}'.encode('utf-8')),
('quota', str(quota).encode('utf-8')),
('nextcloudEnabled', b'TRUE'),
('nextcloudQuota', f'{quota} GB'.encode('utf-8')),
('mobilizonEnabled', b'TRUE'),
('agoraEnabled', b'TRUE'),
('userPassword', f'{{CRYPT}}{password_chiffre}'.encode('utf-8')),
('cn', email.encode('utf-8'))
]
self.ldap_connection.add_s(dn, mod_attrs)
return True
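get_email above matches an address against both the primary mail (`cn`) and the aliases (`mailAlias`) with a single OR filter. How that filter string is assembled, as a tiny standalone sketch (no LDAP connection involved; in a hardened version the address should additionally be escaped, e.g. with `ldap.filter.escape_filter_chars`, to avoid filter injection):

```python
def build_mail_filter(email):
    """OR filter matching entries whose cn or mailAlias equals the address."""
    return "(|(cn={})(mailAlias={}))".format(email, email)

print(build_mail_filter("alice@kaz.bzh"))
# (|(cn=alice@kaz.bzh)(mailAlias=alice@kaz.bzh))
```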

bin/lib/mattermost.py Normal file

@@ -0,0 +1,134 @@
import subprocess
from .config import getDockersConfig, getSecretConfig
mattermost_user = getSecretConfig("mattermostServ", "MM_ADMIN_USER")
mattermost_pass = getSecretConfig("mattermostServ", "MM_ADMIN_PASSWORD")
mattermost_url = f"https://{getDockersConfig('matterHost')}.{getDockersConfig('domain')}"
mmctl = "docker exec -i mattermostServ bin/mmctl"
class Mattermost:
def __init__(self):
pass
def __enter__(self):
self.authenticate()
return self
def __exit__(self, tp, e, traceback):
self.logout()
def authenticate(self):
# Authentification sur MM
cmd = f"{mmctl} auth login {mattermost_url} --name local-server --username {mattermost_user} --password {mattermost_pass}"
subprocess.run(cmd, shell=True, stderr=subprocess.STDOUT, check=True)
def logout(self):
# Authentification sur MM
cmd = f"{mmctl} auth clean"
subprocess.run(cmd, shell=True, stderr=subprocess.STDOUT, check=True)
def post_message(self, message, equipe="kaz", canal="creation-comptes"):
"""
Envoyer un message dans une Equipe/Canal de MM
"""
cmd = f"{mmctl} post create {equipe}:{canal} --message \"{message}\""
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def get_user(self, user):
"""
Le user existe t-il sur MM ?
"""
try:
cmd = f"{mmctl} user search {user} --json"
user_list_output = subprocess.check_output(cmd, shell=True)
return True # Le nom d'utilisateur existe
except subprocess.CalledProcessError:
return False
def create_user(self, user, email, password):
"""
Créer un utilisateur sur MM
"""
cmd = f"{mmctl} user create --email {email} --username {user} --password {password}"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def delete_user(self, email):
"""
Supprimer un utilisateur sur MM
"""
cmd = f"{mmctl} user delete {email} --confirm"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def update_password(self, email, new_password):
"""
Changer un password pour un utilisateur de MM
"""
cmd = f"{mmctl} user change-password {email} --password {new_password}"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def add_user_to_team(self, email, equipe):
"""
Affecte un utilisateur à une équipe MM
"""
cmd = f"{mmctl} team users add {equipe} {email}"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def add_user_to_channel(self, email, equipe, canal):
"""
Affecte un utilisateur à un canal MM
"""
cmd = f'{mmctl} channel users add {equipe}:{canal} {email}'
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def get_teams(self):
"""
Lister les équipes sur MM
"""
cmd = f"{mmctl} team list --disable-pager"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
data_list = output.decode("utf-8").strip().split('\n')
data_list.pop()
return data_list
def create_team(self, equipe, email):
"""
Créer une équipe sur MM et affecter un admin si email est renseigné (set admin marche pô)
"""
#DANGER: l'option --email ne rend pas le user admin de l'équipe comme c'est indiqué dans la doc :(
cmd = f"{mmctl} team create --name {equipe} --display-name {equipe} --private --email {email}"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
#Workaround: on récup l'id du user et de l'équipe pour affecter le rôle "scheme_admin": true, "scheme_user": true avec l'api MM classique.
#TODO:
return output.decode()
def delete_team(self, equipe):
"""
Supprimer une équipe sur MM
"""
cmd = f"{mmctl} team delete {equipe} --confirm"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
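get_teams parses the plain-text output of `mmctl team list --disable-pager`: one team per line plus a trailing summary line that gets popped. A sketch of that parsing step on fabricated output (the exact wording of the mmctl footer line may differ):

```python
def parse_team_list(raw):
    """One team name per line; the last line is a summary, not a team."""
    lines = raw.decode("utf-8").strip().split("\n")
    lines.pop()  # drop the trailing count/summary line
    return lines

sample = b"kaz\nradio\norga-test\nThere are 3 teams\n"
print(parse_team_list(sample))  # ['kaz', 'radio', 'orga-test']
```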

bin/lib/paheko.py Normal file

@@ -0,0 +1,134 @@
import re
import string
import requests
from .config import getDockersConfig, getSecretConfig
paheko_ident = getDockersConfig("paheko_API_USER")
paheko_pass = getDockersConfig("paheko_API_PASSWORD")
paheko_auth = (paheko_ident, paheko_pass)
paheko_url = f"https://kaz-paheko.{getDockersConfig('domain')}"
class Paheko:
def get_categories(self):
"""
Récupérer les catégories Paheko avec le compteur associé
"""
api_url = paheko_url + '/api/user/categories'
response = requests.get(api_url, auth=paheko_auth)
if response.status_code == 200:
data = response.json()
return data
else:
return None
def get_users_in_categorie(self,categorie):
"""
Afficher les membres d'une catégorie Paheko
"""
if not categorie.isdigit():
return 'Id de category non valide', 400
api_url = paheko_url + '/api/user/category/'+categorie+'.json'
response = requests.get(api_url, auth=paheko_auth)
if response.status_code == 200:
data = response.json()
return data
else:
return None
def get_user(self,ident):
"""
Afficher un membre de Paheko par son email kaz ou son numéro ou le nom court de l'orga
"""
emailmatchregexp = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")
if emailmatchregexp.match(ident):
data = { "sql": f"select * from users where email='{ident}' or alias = '{ident}'" }
api_url = paheko_url + '/api/sql/'
response = requests.post(api_url, auth=paheko_auth, data=data)
#TODO: if faut Rechercher count et vérifier que = 1 et supprimer le count=1 dans la réponse
elif ident.isdigit():
api_url = paheko_url + '/api/user/'+ident
response = requests.get(api_url, auth=paheko_auth)
else:
nomorga = re.sub(r'\W+', '', ident) # on vire les caractères non alphanumérique
data = { "sql": f"select * from users where admin_orga=1 and nom_orga='{nomorga}'" }
api_url = paheko_url + '/api/sql/'
response = requests.post(api_url, auth=paheko_auth, data=data)
#TODO:if faut Rechercher count et vérifier que = 1 et supprimer le count=1 dans la réponse
if response.status_code == 200:
data = response.json()
if data["count"] == 1:
return data["results"][0]
elif data["count"] == 0:
return None
else:
return data["results"]
else:
return None
def set_user(self,ident,field,new_value):
"""
Modifie la valeur d'un champ d'un membre paheko (ident= numéro paheko ou email kaz)
"""
#récupérer le numero paheko si on fournit un email kaz
emailmatchregexp = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")
if emailmatchregexp.match(ident):
data = { "sql": f"select id from users where email='{ident}'" }
api_url = paheko_url + '/api/sql/'
response = requests.post(api_url, auth=paheko_auth, data=data)
if response.status_code == 200:
#on extrait l'id de la réponse
data = response.json()
if data['count'] == 0:
print("email non trouvé")
return None
elif data['count'] > 1:
print("trop de résultat")
return None
else:
#OK
ident = data['results'][0]['id']
else:
print("pas de résultat")
return None
elif not ident.isdigit():
print("Identifiant utilisateur invalide")
return None
regexp = re.compile("[^a-zA-Z0-9 \\r\\n\\t" + re.escape(string.punctuation) + "]")
valeur = regexp.sub('',new_value) # mouais, il faudrait être beaucoup plus précis ici en fonction des champs qu'on accepte...
champ = re.sub(r'\W+','',field) # pas de caractères non alphanumériques ici, dans l'idéal, c'est à choisir dans une liste plutot
api_url = paheko_url + '/api/user/'+str(ident)
payload = {champ: valeur}
response = requests.post(api_url, auth=paheko_auth, data=payload)
return response.json()
def get_users_with_action(self, action):
"""
retourne tous les membres de paheko avec une action à mener (création du compte kaz / modification...)
"""
api_url = paheko_url + '/api/sql/'
payload = { "sql": f"select * from users where action_auto='{action}'" }
response = requests.post(api_url, auth=paheko_auth, data=payload)
if response.status_code == 200:
return response.json()
else:
return None
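get_user dispatches on the shape of ident: an email goes through the SQL endpoint on email/alias, a number hits `/api/user/<id>`, anything else is treated as an organisation short name. That routing, isolated as a sketch (the returned labels are illustrative):

```python
import re

EMAIL_RE = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")

def route_ident(ident):
    """Mirror of get_user's dispatch on the ident argument."""
    if EMAIL_RE.match(ident):
        return "sql-by-email"
    if ident.isdigit():
        return "api-by-id"
    return "sql-by-orga"

print(route_ident("alice@kaz.bzh"), route_ident("42"), route_ident("radiokalon"))
# sql-by-email api-by-id sql-by-orga
```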

bin/lib/sympa.py Normal file

@@ -0,0 +1,40 @@
import subprocess
from email_validator import validate_email, EmailNotValidError
from .config import getDockersConfig, getSecretConfig
sympa_user = getSecretConfig("sympaServ", "SOAP_USER")
sympa_pass = getSecretConfig("sympaServ", "SOAP_PASSWORD")
sympa_listmaster = getSecretConfig("sympaServ", "ADMINEMAIL")
sympa_url = f"https://{getDockersConfig('sympaHost')}.{getDockersConfig('domain')}"
sympa_soap = "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl"
sympa_domain = getDockersConfig('domain_sympa')
sympa_liste_info = "infos"
# docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
class Sympa:
def _execute_sympa_command(self, email, liste, service):
if validate_email(email) and validate_email(liste):
cmd = f'{sympa_soap} --soap_url={sympa_url}/sympasoap --trusted_application={sympa_user} --trusted_application_password={sympa_pass} --proxy_vars=USER_EMAIL={sympa_listmaster} --service={service} --service_parameters="{liste},{email}" && echo $?'
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
return output.decode()
def add_email_to_list(self, email, liste=sympa_liste_info):
"""
Ajouter un email dans une liste sympa
"""
output = self._execute_sympa_command(email, f"{liste}@{sympa_domain}", 'add')
return output
def delete_email_from_list(self, email, liste=sympa_liste_info):
"""
Supprimer un email dans une liste sympa
"""
output = self._execute_sympa_command(email, f"{liste}@{sympa_domain}", 'del')
return output

bin/lib/template.py Normal file

@@ -0,0 +1,8 @@
import jinja2
templateLoader = jinja2.FileSystemLoader(searchpath="../templates")
templateEnv = jinja2.Environment(loader=templateLoader)
def render_template(filename, args):
template = templateEnv.get_template(filename)
return template.render(args)

bin/lib/user.py Normal file

@@ -0,0 +1,213 @@
from email_validator import validate_email, EmailNotValidError
from glob import glob
import tempfile
import subprocess
import re
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import smtplib
from .paheko import Paheko
from .ldap import Ldap
from .mattermost import Mattermost
from .sympa import Sympa
from .template import render_template
from .config import getDockersConfig, getSecretConfig
DEFAULT_FILE = "/kaz/tmp/createUser.txt"
webmail_url = f"https://webmail.{getDockersConfig('domain')}"
mattermost_url = f"https://agora.{getDockersConfig('domain')}"
mdp_url = f"https://mdp.{getDockersConfig('domain')}"
sympa_url = f"https://listes.{getDockersConfig('domain')}"
site_url = f"https://{getDockersConfig('domain')}"
cloud_url = f"https://cloud.{getDockersConfig('domain')}"
def _generate_password():
cmd="apg -n 1 -m 10 -M NCL -d"
output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
new_password="_"+output.decode("utf-8").strip()+"_"
return new_password
def create_user(email, email_secours, admin_orga, nom_orga, quota_disque, nom, prenom, nc_orga, garradin_orga, wp_orga, agora_orga, wiki_orga, nc_base, groupe_nc_base, equipe_agora, password=None):
email = email.lower()
with Ldap() as ldap:
# est-il déjà dans le ldap ? (mail ou alias)
if ldap.get_email(email):
print(f"ERREUR 1: {email} déjà existant dans ldap. on arrête tout")
return None
#test nom orga
if admin_orga == 1:
if nom_orga is None:
print(f"ERREUR 0 sur paheko: {email} : nom_orga vide, on arrête tout")
return
if not bool(re.match(r'^[a-z0-9-]+$', nom_orga)):
print(f"ERREUR 0 sur paheko: {email} : nom_orga ({tab['nom_orga']}) incohérent (minuscule/chiffre/-), on arrête tout")
return
#test email_secours
email_secours = email_secours.lower()
if not validate_email(email_secours):
print("Mauvais email de secours")
return
#test quota
quota = quota_disque
if not quota.isdigit():
print(f"ERREUR 2: quota non numérique : {quota}, on arrête tout")
return
#on génère un password
password = password or _generate_password()
#on créé dans le ldap
#à quoi servent prenom/nom dans le ldap ?
data = {
"prenom": prenom,
"nom": nom,
"password": password,
"email_secours": email_secours,
"quota": quota
}
if not ldap.create_user(email, **data):
print("Erreur LDAP")
return
with Mattermost() as mm:
#on créé dans MM
user = email.split('@')[0]
mm.create_user(user, email, password)
mm.add_user_to_team(email, "kaz")
#et aux 2 canaux de base
mm.add_user_to_channel(email, "kaz", "une-question--un-soucis")
mm.add_user_to_channel(email, "kaz", "cafe-du-commerce--ouvert-2424h")
#on créé une nouvelle équipe ds MM si besoin
if admin_orga == 1:
mm.create_team(nom_orga, email)
#BUG: créer la nouvelle équipe n'a pas rendu l'email admin, on le rajoute comme membre simple
mm.add_user_to_team(email, nom_orga)
#on inscrit email et email_secours à la nl sympa_liste_info
sympa = Sympa()
sympa.add_email_to_list(email)
sympa.add_email_to_list(email_secours)
#on construit/envoie le mail
context = {
'ADMIN_ORGA': admin_orga,
'NOM': f"{prenom} {nom}",
'EMAIL_SOUHAITE': email,
'PASSWORD': password,
'QUOTA': quota_disque,
'URL_WEBMAIL': webmail_url,
'URL_AGORA': mattermost_url,
'URL_MDP': mdp_url,
'URL_LISTE': sympa_url,
'URL_SITE': site_url,
'URL_CLOUD': cloud_url,
}
html = render_template("email_inscription.html", context)
raw = render_template("email_inscription.txt", context)
message = MIMEMultipart()
message["Subject"] = "KAZ: confirmation d'inscription !"
message["From"] = f"contact@{getDockersConfig('domain')}"
message["To"] = f"{email}, {email_secours}"
message.attach(MIMEText(raw, "plain"))
message.attach(MIMEText(html, "html"))
with smtplib.SMTP(f"mail.{getDockersConfig('domain')}", 25) as server:
server.sendmail(f"contact@{getDockersConfig('domain')}", [email,email_secours], message.as_string())
#on met le flag paheko action à Aucune
paheko = Paheko()
try:
paheko.set_user(email, "action_auto", "Aucune")
except:
print(f"Erreur paheko pour remettre action_auto = Aucune pour {email}")
#on post sur MM pour dire ok
with Mattermost() as mm:
msg=f"**POST AUTO** Inscription réussie pour {email} avec le secours {email_secours} Bisou!"
mm.post_message(message=msg)
def create_waiting_users():
"""
Crée les kaznautes en attente: inscription sur MM / Cloud / email + msg sur MM + email à partir de action="a créer" sur paheko
"""
#verrou pour empêcher de lancer en même temps la même api
prefixe="create_user_lock_"
if glob(f"{tempfile.gettempdir()}/{prefixe}*"):
print("Lock présent")
return None
lock_file = tempfile.NamedTemporaryFile(prefix=prefixe,delete=True)
#qui sont les kaznautes à créer ?
paheko = Paheko()
liste_kaznautes = paheko.get_users_with_action("A créer")
if liste_kaznautes:
count=liste_kaznautes['count']
if count==0:
print("aucun nouveau kaznaute à créer")
return
#au moins un kaznaute à créer
for tab in liste_kaznautes['results']:
create_user(**tab)
print("fin des inscriptions")
def create_users_from_file(file=DEFAULT_FILE):
"""
Crée les kaznautes en attente: inscription sur MM / Cloud / email + msg sur MM + email à partir du fichier
"""
#verrou pour empêcher de lancer en même temps la même api
prefixe="create_user_lock_"
if glob(f"{tempfile.gettempdir()}/{prefixe}*"):
print("Lock présent")
return None
lock_file = tempfile.NamedTemporaryFile(prefix=prefixe,delete=True)
#qui sont les kaznautes à créer ?
liste_kaznautes = []
with open(file) as lines:
for line in lines:
line = line.strip()
if not line.startswith("#") and line != "":
user_data = line.split(';')
user_dict = {
"nom": user_data[0],
"prenom": user_data[1],
"email": user_data[2],
"email_secours": user_data[3],
"nom_orga": user_data[4],
"admin_orga": user_data[5],
"nc_orga": user_data[6],
"garradin_orga": user_data[7],
"wp_orga": user_data[8],
"agora_orga": user_data[9],
"wiki_orga": user_data[10],
"nc_base": user_data[11],
"groupe_nc_base": user_data[12],
"equipe_agora": user_data[13],
"quota_disque": user_data[14],
"password": user_data.get(15),
}
liste_kaznautes.append(user_dict)
if liste_kaznautes:
for tab in liste_kaznautes:
create_user(**tab)
print("fin des inscriptions")

View File

@@ -16,7 +16,7 @@ availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
# CLOUD
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio ransomware_protection" #rainloop richdocumentscode
QUIET="1"
ONNAS=
@@ -120,10 +120,11 @@ firstInstall(){
}
setOfficeUrl(){
OFFICE_URL="https://${officeHost}.${domain}"
if [ ! "${site}" = "prod1" ]; then
OFFICE_URL="https://${site}-${officeHost}.${domain}"
fi
# Did le 25 mars les offices sont tous normalisé sur les serveurs https://${site}-${officeHost}.${domain}
#OFFICE_URL="https://${officeHost}.${domain}"
#if [ ! "${site}" = "prod1" ]; then
OFFICE_URL="https://${site}-${officeHost}.${domain}"
#fi
occCommand "config:app:set --value $OFFICE_URL richdocuments public_wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments disable_certificate_verification"

@@ -143,6 +143,4 @@ for orgaLong in ${Orgas}; do
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${KAZ_BIN_DIR}/manageCloud.sh" --officeURL "${orgaCourt}"
fi
done

@@ -0,0 +1,41 @@
#!/bin/bash
#date: 23/04/2025
#ki: fab
#koi: supprimer de acme.json les certificats LE devenus inutiles
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
FILE_ACME_ORI="/var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json"
FILE_ACME="/tmp/acme.json"
FILE_URL=$(mktemp)
FILE_ACME_TMP=$(mktemp)
#l'ip du serveur:
#marche po pour les machines hébergée chez T.C... :( on récupère l'IP dans config/dockers.env
#MAIN_IP=$(curl ifconfig.me)
#DANGER: IP depuis config/dockers.env ne fonctionne pas pour les domaines hors *.kaz.bzh (ex:radiokalon.fr)
#sauvegarde
cp $FILE_ACME_ORI $FILE_ACME
cp $FILE_ACME "$FILE_ACME"_$(date +%Y%m%d_%H%M%S)
#je cherche toutes les url
jq -r '.letsencrypt.Certificates[].domain.main' $FILE_ACME > $FILE_URL
while read -r url; do
#echo "Traitement de : $url"
nb=$(dig $url | grep $MAIN_IP | wc -l)
if [ "$nb" -eq 0 ]; then
#absent, on vire de acme.json
echo "on supprime "$url
jq --arg url "$url" 'del(.letsencrypt.Certificates[] | select(.domain.main == $url))' $FILE_ACME > $FILE_ACME_TMP
mv -f $FILE_ACME_TMP $FILE_ACME
fi
done < "$FILE_URL"
echo "si satisfait, remettre "$FILE_ACME" dans "$FILE_ACME_ORI
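
The jq-based loop above prunes one certificate per pass; the same cleanup can be done in a single pass over the parsed `acme.json`. A Python sketch assuming the same file structure (`resolves_here` stands in for the dig/IP check; it is an illustrative callback, not part of the script):

```python
import json

def prune_stale_certs(acme_path, resolves_here):
    """Drop letsencrypt certificates whose main domain no longer points at this server."""
    with open(acme_path) as f:
        acme = json.load(f)
    certs = acme["letsencrypt"]["Certificates"]
    kept = [c for c in certs if resolves_here(c["domain"]["main"])]
    for c in certs:
        if c not in kept:
            print("on supprime", c["domain"]["main"])
    acme["letsencrypt"]["Certificates"] = kept
    return acme
```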

@@ -1,7 +1,6 @@
#!/bin/bash
# --------------------------------------------------------------------------------------
# Didier
#
# Script de sauvegarde avec BorgBackup
# la commande de creation du dépot est : borg init --encryption=repokey /mnt/backup-nas1/BorgRepo
# la conf de borg est dans /root/.config/borg
@@ -20,7 +19,7 @@ setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
VERSION="V-3-11-2024"
VERSION="V-10-03-2025"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
#IFS=' '
@@ -72,20 +71,10 @@ LogFic() {
}
#
ExpMail() {
MAIL_SOURCE=$1
MAIL_DEST=$1
MAIL_SUJET=$2
MAIL_DEST=$3
MAIL_TEXTE=$4
# a mettre ailleurs
mailexp=${borg_MAILEXP}
mailpassword=${borg_MAILPASSWORD}
mailserveur=${borg_MAILSERVEUR}
#
#sendemail -t ${MAIL_DEST} -u ${MAIL_SUJET} -m ${MAIL_TEXTE} -f $mailexp -s $mailserveur:587 -xu $mailexp -xp $mailpassword -o tls=yes >/dev/null 2>&1
MAIL_TEXTE=$3
printf "Subject: ${MAIL_SUJET}\n\n${MAIL_TEXTE}" | msmtp ${MAIL_DEST}
#docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset="UTF-8"' -r ${MAIL_SOURCE} -s "${MAIL_SUJET}" ${MAIL_DEST} << EOF
#${MAIL_TEXTE}
#EOF
}
Pre_Sauvegarde() {
@@ -297,7 +286,7 @@ if [ "${REPO_MOUNT_ACTIVE}" = "true" ]
then
echo "le REPO : ${BORG_REPO} est monté , je sors"
LogFic "le REPO : ${BORG_REPO} est monté , je sors"
ExpMail borg@${domain} "${site} : Sauvegarde en erreur" ${MAIL_RAPPORT} "le REPO : ${BORG_REPO} est monté, sauvegarde impossible"
ExpMail ${MAIL_RAPPORT} "${site} : Sauvegarde en erreur" "le REPO : ${BORG_REPO} est monté, sauvegarde impossible"
exit 1
fi
@@ -349,7 +338,7 @@ BorgBackup
"
LogFic " - la sauvegarde est OK"
[ "$MAILOK" = true ] && ExpMail borg@${domain} "${site} : Sauvegarde Ok" ${MAIL_RAPPORT} ${MESS_SAUVE_OK}${LOGDATA}
[ "$MAILOK" = true ] && ExpMail ${MAIL_RAPPORT} "${site} : Sauvegarde Ok" ${MESS_SAUVE_OK}${LOGDATA}
IFS=' '
;;
'1' )
@@ -365,7 +354,7 @@ BorgBackup
"
LogFic " - Sauvegarde en Warning: ${BACKUP_EXIT}"
[ "$MAILWARNING" = true ] && ExpMail borg@${domain} "${site} : Sauvegarde en Warning: ${BACKUP_EXIT}" ${MAIL_RAPPORT} ${MESS_SAUVE_ERR}${LOGDATA}
[ "$MAILWARNING" = true ] && ExpMail ${MAIL_RAPPORT} "${site} : Sauvegarde en Warning: ${BACKUP_EXIT}" ${MESS_SAUVE_ERR}${LOGDATA}
IFS=' '
;;
* )
@@ -381,7 +370,7 @@ BorgBackup
"
LogFic " - !!!!! Sauvegarde en Erreur !!!!! : ${BACKUP_EXIT}"
ExpMail borg@${domain} "${site} : Sauvegarde en Erreur !!!! : ${BACKUP_EXIT}" ${MAIL_RAPPORT} ${MESS_SAUVE_ERR}${LOGDATA}
ExpMail ${MAIL_RAPPORT} "${site} : Sauvegarde en Erreur !!!! : ${BACKUP_EXIT}" ${MESS_SAUVE_ERR}${LOGDATA}
IFS=' '
;;
esac
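
The rewritten `ExpMail` assembles a raw message and pipes it to msmtp. The same assembly in Python (the helper name and `dry_run` flag are illustrative); note the blank line that the message format requires between the headers and the body:

```python
import subprocess

def exp_mail(dest, subject, body, dry_run=False):
    """Build a minimal message and hand it to msmtp, mirroring the bash ExpMail."""
    # an empty line must separate the Subject header from the body
    message = f"Subject: {subject}\n\n{body}"
    if not dry_run:
        subprocess.run(["msmtp", dest], input=message.encode(), check=True)
    return message
```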

@@ -30,12 +30,12 @@ while read line ; do
sed "s%\(.*\)--clean_val--\(.*\)%\1${JIRAFEAU_DIR}\2%" <<< ${line}
continue
;;
*DATABASE*)
*DATABASE*|*DB_NAME*)
dbName="$(sed "s/\([^_]*\)_.*/\1/" <<< ${line})_$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${dbName}\2/" <<< ${line}
continue
;;
*ROOT_PASSWORD*|*PASSWORD*)
*ROOT_PASSWORD*|*PASSWORD*|*SECRET*)
pass="$(apg -n 1 -m 16 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
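
The case arms above route each env line containing `--clean_val--` to a generated value (a short database name or a 16-character password). A Python sketch of the same dispatch, using the stdlib `secrets` module in place of `apg` (an assumption for illustration, not what the script uses):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def fill_placeholder(line):
    """Substitute --clean_val-- the way the template loop does (sketch)."""
    if "--clean_val--" not in line:
        return line
    if any(k in line for k in ("DATABASE", "DB_NAME")):
        # e.g. "wp_DATABASE=..." -> "wp_" + 2 random characters
        value = f"{line.split('_', 1)[0]}_" + "".join(secrets.choice(ALPHABET) for _ in range(2))
    elif any(k in line for k in ("ROOT_PASSWORD", "PASSWORD", "SECRET")):
        value = "".join(secrets.choice(ALPHABET) for _ in range(16))
    else:
        return line
    return line.replace("--clean_val--", value)
```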

bin/templates/email.css Normal file

@@ -0,0 +1,82 @@
body {
font-family: Arial, sans-serif;
background-color: #f4f4f4;
margin: 0;
padding: 0;
}
.email-content {
background-color: #f0f0f0; /* Light gray background */
margin: 20px auto;
padding: 20px;
border: 1px solid #dddddd;
max-width: 600px;
width: 90%; /* This makes the content take 90% width of its container */
text-align: left; /* Remove text justification */
}
header {
background-color: #E16969;
color: white;
text-align: center;
height: 50px; /* Fixed height for header */
line-height: 50px; /* Vertically center the text */
width: 100%; /* Make header full width */
}
footer {
background-color: #E16969;
color: white;
text-align: center;
height: 50px; /* Fixed height for footer */
line-height: 50px; /* Vertically center the text */
width: 100%; /* Make footer full width */
}
.header-container {
position: relative; /* Pour positionner le logo et le texte dans le header */
height: 50px; /* Hauteur maximale du header */
}
.logo {
position: absolute; /* Pour positionner le logo */
max-height: 100%; /* Taille maximale du logo égale à la hauteur du header */
top: 0; /* Aligner le logo en haut */
left: 0; /* Aligner le logo à gauche */
margin-right: 10px; /* Marge à droite du logo */
}
.header-container h1, .footer-container p {
margin: 0;
font-size: 24px;
}
.footer-container p {
font-size: 12px;
}
.footer-container a {
color: #FFFFFF; /* White color for links in footer */
text-decoration: none;
}
.footer-container a:hover {
text-decoration: underline; /* Optional: add underline on hover */
}
a {
color: #E16969; /* Same color as header/footer background for all other links */
text-decoration: none;
}
a:hover {
text-decoration: underline; /* Optional: add underline on hover */
}
h2 {
color: #E16969;
}
p {
line-height: 1.6;
}

@@ -0,0 +1,9 @@
<footer>
<div class="footer-container">
<p>
Ici, on prend soin de vos données et on ne les vend pas !
<br>
<a href="https://kaz.bzh">https://kaz.bzh</a>
</p>
</div>
</footer>

@@ -0,0 +1,6 @@
<header>
<div class="header-container">
<img class="logo" src="https://kaz-cloud.kaz.bzh/apps/theming/image/logo?v=33" alt="KAZ Logo">
<h1>Kaz : Le numérique sobre, libre, éthique et local</h1>
</div>
</header>

@@ -0,0 +1,94 @@
<!DOCTYPE html>
<html lang="fr">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Email d'inscription</title>
<style>
{% include 'email.css' %}
</style>
</head>
<body>
{% include 'email_header.html' %}
<div class="email-content">
<p>
Bonjour {{NOM}}!<br><br>
Bienvenue chez KAZ!<br><br>
Vous disposez de :
<ul>
<li>une messagerie classique : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>une messagerie instantanée pour discuter au sein d'équipes : <a href={{URL_AGORA}}>{{URL_AGORA}}</a></li>
</ul>
Votre email et identifiant pour ces services : {{EMAIL_SOUHAITE}}<br>
Le mot de passe : <b>{{PASSWORD}}</b><br><br>
Pour changer votre mot de passe de messagerie, c'est ici: <a href={{URL_MDP}}>{{URL_MDP}}</a><br>
Si vous avez perdu votre mot de passe, c'est ici: <a href={{URL_MDP}}/?action=sendtoken>{{URL_MDP}}/?action=sendtoken</a><br><br>
Vous pouvez accéder à votre messagerie classique:
<ul>
<li>soit depuis votre webmail : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>soit depuis votre bureau virtuel : <a href={{URL_CLOUD}}>{{URL_CLOUD}}</a></li>
<li>soit depuis un client de messagerie comme Thunderbird</li>
</ul>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
En tant qu'association/famille/société, vous avez la possibilité d'ouvrir, quand vous le voulez, des services kaz : il vous suffit de nous le demander.<br><br>
Pourquoi n'ouvrons-nous pas tous les services tout de suite ? parce que nous aimons la sobriété et que nous préservons notre espace disque ;)<br>
A quoi sert d'avoir un site web si on ne l'utilise pas, n'est-ce pas ?<br><br>
Par retour de mail, dites-nous de quoi vous avez besoin tout de suite entre:
<ul>
<li>une comptabilité : un service de gestion adhérents/clients</li>
<li>un site web de type WordPress</li>
<li>un cloud : bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances</li>
</ul>
Une fois que vous aurez répondu à ce mail, votre demande sera traitée manuellement.
</p>
{% endif %}
<p>
Vous avez quelques docs intéressantes sur le wiki de kaz:
<ul>
<li>Migrer son site internet wordpress vers kaz : <a href="https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz">https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz</a></li>
<li>Migrer sa messagerie vers kaz : <a href="https://wiki.kaz.bzh/messagerie/gmail/start">https://wiki.kaz.bzh/messagerie/gmail/start</a></li>
<li>Démarrer simplement avec son cloud : <a href="https://wiki.kaz.bzh/nextcloud/start">https://wiki.kaz.bzh/nextcloud/start</a></li>
</ul>
Votre quota est de {{QUOTA}}GB. Si vous souhaitez plus de place pour vos fichiers ou la messagerie, faites-nous signe !<br><br>
Pour accéder à la messagerie instantanée et communiquer avec les membres de votre équipe ou ceux de kaz : <a href={{URL_AGORA}}/login>{{URL_AGORA}}/login</a><br>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
Comme administrateur de votre organisation, vous pouvez créer des listes de diffusion en vous rendant sur <a href={{URL_LISTE}}>{{URL_LISTE}}</a><br>
</p>
{% endif %}
<p>
Enfin, vous disposez de tous les autres services KAZ où l'authentification n'est pas nécessaire : <a href={{URL_SITE}}>{{URL_SITE}}</a><br><br>
En cas de soucis, n'hésitez pas à poser vos questions sur le canal 'Une question ? un soucis' de l'agora dispo ici : <a href={{URL_AGORA}}>{{URL_AGORA}}</a><br><br>
Si vous avez besoin d'accompagnement pour votre site, votre cloud, votre compta, votre migration de messagerie,...<br>nous proposons des formations mensuelles gratuites. Si vous souhaitez être accompagné par un professionnel, nous pouvons vous donner une liste de pros, référencés par KAZ.<br><br>
À bientôt 😉<br><br>
La collégiale de KAZ.<br>
</p>
</div> <!-- <div class="email-content"> -->
{% include 'email_footer.html' %}
</body>
</html>

@@ -0,0 +1,70 @@
<p>
Bonjour {{NOM}}!<br><br>
Bienvenue chez KAZ!<br><br>
Vous disposez de :
<ul>
<li>une messagerie classique : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>une messagerie instantanée pour discuter au sein d'équipes : <a href={{URL_AGORA}}>{{URL_AGORA}}</a></li>
</ul>
Votre email et identifiant pour ces services : {{EMAIL_SOUHAITE}}<br>
Le mot de passe : <b>{{PASSWORD}}</b><br><br>
Pour changer votre mot de passe de messagerie, c'est ici: <a href={{URL_MDP}}>{{URL_MDP}}</a><br>
Si vous avez perdu votre mot de passe, c'est ici: <a href={{URL_MDP}}/?action=sendtoken>{{URL_MDP}}/?action=sendtoken</a><br><br>
Vous pouvez accéder à votre messagerie classique:
<ul>
<li>soit depuis votre webmail : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>soit depuis votre bureau virtuel : <a href={{URL_CLOUD}}>{{URL_CLOUD}}</a></li>
<li>soit depuis un client de messagerie comme Thunderbird</li>
</ul>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
En tant qu'association/famille/société, vous avez la possibilité d'ouvrir, quand vous le voulez, des services kaz : il vous suffit de nous le demander.<br><br>
Pourquoi n'ouvrons-nous pas tous les services tout de suite ? parce que nous aimons la sobriété et que nous préservons notre espace disque ;)<br>
A quoi sert d'avoir un site web si on ne l'utilise pas, n'est-ce pas ?<br><br>
Par retour de mail, dites-nous de quoi vous avez besoin tout de suite entre:
<ul>
<li>une comptabilité : un service de gestion adhérents/clients</li>
<li>un site web de type WordPress</li>
<li>un cloud : bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances</li>
</ul>
Une fois que vous aurez répondu à ce mail, votre demande sera traitée manuellement.
</p>
{% endif %}
<p>
Vous avez quelques docs intéressantes sur le wiki de kaz:
<ul>
<li>Migrer son site internet wordpress vers kaz : <a href="https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz">https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz</a></li>
<li>Migrer sa messagerie vers kaz : <a href="https://wiki.kaz.bzh/messagerie/gmail/start">https://wiki.kaz.bzh/messagerie/gmail/start</a></li>
<li>Démarrer simplement avec son cloud : <a href="https://wiki.kaz.bzh/nextcloud/start">https://wiki.kaz.bzh/nextcloud/start</a></li>
</ul>
Votre quota est de {{QUOTA}}GB. Si vous souhaitez plus de place pour vos fichiers ou la messagerie, faites-nous signe !<br><br>
Pour accéder à la messagerie instantanée et communiquer avec les membres de votre équipe ou ceux de kaz : <a href={{URL_AGORA}}/login>{{URL_AGORA}}/login</a><br>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
Comme administrateur de votre organisation, vous pouvez créer des listes de diffusion en vous rendant sur <a href={{URL_LISTE}}>{{URL_LISTE}}</a><br>
</p>
{% endif %}
<p>
Enfin, vous disposez de tous les autres services KAZ où l'authentification n'est pas nécessaire : <a href={{URL_SITE}}>{{URL_SITE}}</a><br><br>
En cas de soucis, n'hésitez pas à poser vos questions sur le canal 'Une question ? un soucis' de l'agora dispo ici : <a href={{URL_AGORA}}>{{URL_AGORA}}</a><br><br>
Si vous avez besoin d'accompagnement pour votre site, votre cloud, votre compta, votre migration de messagerie,...<br>nous proposons des formations mensuelles gratuites. Si vous souhaitez être accompagné par un professionnel, nous pouvons vous donner une liste de pros, référencés par KAZ.<br><br>
À bientôt 😉<br><br>
La collégiale de KAZ.<br>

@@ -84,7 +84,6 @@ jirafeauUpdate(){
updateEnvDB "etherpad" "${KAZ_KEY_DIR}/env-${etherpadDBName}" "${etherpadDBName}"
updateEnvDB "framadate" "${KAZ_KEY_DIR}/env-${framadateDBName}" "${framadateDBName}"
updateEnvDB "gitea" "${KAZ_KEY_DIR}/env-${gitDBName}" "${gitDBName}"
updateEnvDB "mattermost" "${KAZ_KEY_DIR}/env-${mattermostDBName}" "${mattermostDBName}"
updateEnvDB "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudDBName}" "${nextcloudDBName}"
updateEnvDB "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeDBName}" "${roundcubeDBName}"
updateEnvDB "sympa" "${KAZ_KEY_DIR}/env-${sympaDBName}" "${sympaDBName}"
@@ -92,6 +91,8 @@ updateEnvDB "vigilo" "${KAZ_KEY_DIR}/env-${vigiloDBName}" "${vigiloDBName}"
updateEnvDB "wp" "${KAZ_KEY_DIR}/env-${wordpressDBName}" "${wordpressDBName}"
updateEnvDB "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenDBName}" "${vaultwardenDBName}"
updateEnvDB "castopod" "${KAZ_KEY_DIR}/env-${castopodDBName}" "${castopodDBName}"
updateEnvDB "spip" "${KAZ_KEY_DIR}/env-${spipDBName}" "${spipDBName}"
updateEnvDB "mastodon" "${KAZ_KEY_DIR}/env-${mastodonDBName}" "${mastodonDBName}"
updateEnv "apikaz" "${KAZ_KEY_DIR}/env-${apikazServName}"
updateEnv "ethercalc" "${KAZ_KEY_DIR}/env-${ethercalcServName}"
@@ -101,6 +102,7 @@ updateEnv "gandi" "${KAZ_KEY_DIR}/env-gandi"
updateEnv "gitea" "${KAZ_KEY_DIR}/env-${gitServName}"
updateEnv "jirafeau" "${KAZ_KEY_DIR}/env-${jirafeauServName}"
updateEnv "mattermost" "${KAZ_KEY_DIR}/env-${mattermostServName}"
updateEnv "mattermost" "${KAZ_KEY_DIR}/env-${mattermostDBName}"
updateEnv "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudServName}"
updateEnv "office" "${KAZ_KEY_DIR}/env-${officeServName}"
updateEnv "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeServName}"
@@ -113,7 +115,11 @@ updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonServName}"
updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonDBName}"
updateEnv "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenServName}"
updateEnv "castopod" "${KAZ_KEY_DIR}/env-${castopodServName}"
updateEnv "spip" "${KAZ_KEY_DIR}/env-${spipServName}"
updateEnv "ldap" "${KAZ_KEY_DIR}/env-${ldapUIName}"
updateEnv "peertube" "${KAZ_KEY_DIR}/env-${peertubeServName}"
updateEnv "peertube" "${KAZ_KEY_DIR}/env-${peertubeDBName}" "${peertubeDBName}"
updateEnv "mastodon" "${KAZ_KEY_DIR}/env-${mastodonServName}"
framadateUpdate

@@ -1,2 +1,2 @@
proxy
#traefik
# proxy
traefik

@@ -4,7 +4,7 @@ dokuwiki
paheko
gitea
jirafeau
mattermost
#mattermost
roundcube
mobilizon
vaultwarden

@@ -4,3 +4,4 @@ collabora
etherpad
web
imapsync
spip

@@ -93,13 +93,15 @@ vaultwardenHost=koffre
traefikHost=dashboard
imapsyncHost=imapsync
castopodHost=pod
spipHost=spip
mastodonHost=masto
apikazHost=apikaz
snappymailHost=snappymail
########################################
# ports internes
matterPort=8000
matterPort=8065
imapsyncPort=8080
apikaz=5000
@@ -147,6 +149,10 @@ ldapUIName=ldapUI
imapsyncServName=imapsyncServ
castopodDBName=castopodDB
castopodServName=castopodServ
mastodonServName=mastodonServ
spipDBName=spipDB
spipServName=spipServ
mastodonDBName=mastodonDB
apikazServName=apikazServ
########################################

@@ -13,6 +13,8 @@ services:
- orgaDB:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
environment:
- MARIADB_AUTO_UPGRADE=1
env_file:
- ../../secret/env-${nextcloudDBName}
# - ../../secret/env-${mattermostDBName}
@@ -214,6 +216,31 @@ services:
- ../../secret/env-${castopodServName}
command: --requirepass ${castopodRedisPassword}
#}}
#{{spip
spip:
image: ipeos/spip:4.4
restart: ${restartPolicy}
depends_on:
- db
links:
- db
env_file:
- ../../secret/env-${spipServName}
environment:
- SPIP_AUTO_INSTALL=1
- SPIP_DB_HOST=db
- SPIP_SITE_ADDRESS=https://${orga}${spipHost}.${domain}
expose:
- 80
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${spipServName}.rule=Host(`${orga}${spipHost}.${domain}`){{FOREIGN_SPIP}}"
networks:
- orgaNet
volumes:
- spip:/usr/src/spip
#}}
@@ -296,8 +323,13 @@ volumes:
castopodCache:
external: true
name: orga_${orga}castopodCache
#}}
#{{spip
spip:
external: true
name: orga_${orga}spip
#}}
networks:

@@ -68,6 +68,18 @@ GRANT ALL ON ${castopod_MYSQL_DATABASE}.* TO '${castopod_MYSQL_USER}'@'%' IDENTI
FLUSH PRIVILEGES;"
;;
'spip' )
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${spip_MYSQL_DATABASE};
DROP USER IF EXISTS '${spip_MYSQL_USER}';
CREATE USER '${spip_MYSQL_USER}'@'%';
GRANT ALL ON ${spip_MYSQL_DATABASE}.* TO '${spip_MYSQL_USER}'@'%' IDENTIFIED BY '${spip_MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
esac
done

@@ -37,3 +37,7 @@ docker volume create --name=orga_${orga}wordpress
docker volume create --name=orga_${orga}castopodCache
docker volume create --name=orga_${orga}castopodMedia
#}}
#{{spip
docker volume create --name=orga_${orga}spip
#}}

@@ -20,7 +20,7 @@ STAGE_CREATE=
STAGE_INIT=
usage(){
echo "Usage: $0 [-h] [-l] [+/-paheko] [-/+cloud [-/+collabora}]] [+/-agora] [+/-wiki] [+/-wp] [+/-pod] [x{G/M/k}] OrgaName"
echo "Usage: $0 [-h] [-l] [+/-paheko] [-/+cloud [-/+collabora}]] [+/-agora] [+/-wiki] [+/-wp] [+/-pod] [+/-spip] [x{G/M/k}] OrgaName"
echo " -h|--help : this help"
echo " -l|--list : list service"
@@ -34,6 +34,7 @@ usage(){
echo " +/- wiki : on/off wiki"
echo " +/- wp|word* : on/off wp"
echo " +/- casto*|pod : on/off castopod"
echo " +/- spip : on/off spip"
echo " x[GMk] : set quota"
echo " OrgaName : name must contain a-z0-9_\-"
}
@@ -141,6 +142,7 @@ export agora=$(flagInCompose docker-compose.yml agora: off)
export wiki=$(flagInCompose docker-compose.yml dokuwiki: off)
export wp=$(flagInCompose docker-compose.yml wordpress: off)
export castopod=$(flagInCompose docker-compose.yml castopod: off)
export spip=$(flagInCompose docker-compose.yml spip: off)
export db="off"
export services="off"
export paheko=$([[ -f usePaheko ]] && echo "on" || echo "off")
@@ -159,7 +161,7 @@ INITCMD2="--install"
for ARG in "$@"; do
case "${ARG}" in
'-show' )
for i in cloud collabora agora wiki wp castopod db; do
for i in cloud collabora agora wiki wp castopod spip db; do
echo "${i}=${!i}"
done
exit;;
@@ -225,6 +227,11 @@ for ARG in "$@"; do
DBaInitialiser="$DBaInitialiser castopod"
INITCMD2="$INITCMD2 -pod"
;;
'+spip' )
spip="on"
DBaInitialiser="$DBaInitialiser spip"
;;
[.0-9]*[GMk] )
quota="${ARG}"
;;
@@ -304,6 +311,13 @@ if [[ "${castopod}" = "on" ]]; then
else
DEL_DOMAIN+="${ORGA}-${castopodHost} "
fi
if [[ "${spip}" = "on" ]]; then
DOMAIN_AREA+=" - ${ORGA}-\${spipServName}:${ORGA}-\${spipHost}.\${domain}\n"
ADD_DOMAIN+="${ORGA}-${spipHost} "
else
DEL_DOMAIN+="${ORGA}-${spipHost} "
fi
DOMAIN_AREA+="}}\n"
if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
@@ -358,6 +372,9 @@ update() {
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
FOREIGN_POD=$(grep " ${ORGA};" "${KAZ_CONF_PROXY_DIR}/pod_kaz_map" 2>/dev/null | \
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
FOREIGN_SPIP=$(grep " ${ORGA};" "${KAZ_CONF_PROXY_DIR}/spip_kaz_map" 2>/dev/null | \
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
awk '
BEGIN {cp=1}
/#}}/ {cp=1 ; next};
@@ -371,6 +388,7 @@ update() {
-e "s/{{FOREIGN_NC}}/${FOREIGN_NC}/"\
-e "s/{{FOREIGN_DW}}/${FOREIGN_DW}/"\
-e "s/{{FOREIGN_POD}}/${FOREIGN_POD}/"\
-e "s/{{FOREIGN_SPIP}}/${FOREIGN_SPIP}/"\
-e "s|\${orga}|${ORGA}-|g"
) > "$2"
sed "s/storage_opt:.*/storage_opt: ${quota}/g" -i "$2"

@@ -0,0 +1,42 @@
services:
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.52.0
container_name: cadvisor
command:
- "--store_container_labels=false"
- "--whitelisted_container_labels=com.docker.compose.project"
- "--housekeeping_interval=60s"
- "--docker_only=true"
- "--disable_metrics=percpu,sched,tcp,udp,disk,diskIO,hugetlb,referenced_memory,cpu_topology,resctrl"
networks:
- traefikNet
labels:
- "traefik.enable=true"
- "traefik.http.routers.cadvisor-secure.entrypoints=websecure"
- "traefik.http.routers.cadvisor-secure.rule=Host(`cadvisor-${site}.${domain}`)"
#- "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.cadvisor-secure.service=cadvisor"
- "traefik.http.routers.cadvisor-secure.middlewares=test-adminipallowlist@file"
- "traefik.http.services.cadvisor.loadbalancer.server.port=8080"
- "traefik.docker.network=traefikNet"
# ports:
# - 8098:8080
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
- /dev/disk/:/dev/disk:ro
devices:
- /dev/kmsg
privileged: true
restart: unless-stopped
networks:
traefikNet:
external: true
name: traefikNet

@@ -27,11 +27,13 @@ services:
- "traefik.docker.network=giteaNet"
db:
image: mariadb:10.5
image: mariadb
container_name: ${gitDBName}
restart: ${restartPolicy}
env_file:
- ../../secret/env-${gitDBName}
environment:
- MARIADB_AUTO_UPGRADE=1
volumes:
- gitDB:/var/lib/mysql
- /etc/timezone:/etc/timezone:ro

@@ -1,7 +1,7 @@
services:
prometheus:
image: prom/prometheus:v2.15.2
image: prom/prometheus:v3.3.0
restart: unless-stopped
container_name: ${prometheusServName}
volumes:
@@ -10,27 +10,27 @@ services:
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command:
- "--web.route-prefix=/"
- "--web.external-url=https://${site}.${domain}/prometheus"
# - "--web.route-prefix=/"
# - "--web.external-url=https://prometheus.${domain}"
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/usr/share/prometheus/console_libraries"
- "--web.console.templates=/usr/share/prometheus/consoles"
networks:
- traefikNet
labels:
- "traefik.enable=true"
- "traefik.http.routers.prometheus-secure.entrypoints=websecure"
- "traefik.http.middlewares.prometheus-stripprefix.stripprefix.prefixes=/prometheus"
- "traefik.http.routers.prometheus-secure.rule=Host(`${site}.${domain}`) && PathPrefix(`/prometheus`)"
# - "traefik.http.routers.prometheus-secure.tls=true"
- "traefik.http.routers.prometheus-secure.middlewares=prometheus-stripprefix,test-adminiallowlist@file,traefik-auth"
- "traefik.http.routers.prometheus-secure.service=prometheus"
- "traefik.http.services.prometheus.loadbalancer.server.port=9090"
- "traefik.docker.network=traefikNet"
# labels:
# - "traefik.enable=true"
# - "traefik.http.routers.prometheus-secure.entrypoints=websecure"
# - "traefik.http.middlewares.prometheus-stripprefix.stripprefix.prefixes=/prometheus"
# - "traefik.http.routers.prometheus-secure.rule=Host(`prometheus.${domain}`)"
# # - "traefik.http.routers.prometheus-secure.tls=true"
# - "traefik.http.routers.prometheus-secure.middlewares=prometheus-stripprefix,test-adminiallowlist@file,traefik-auth"
# - "traefik.http.routers.prometheus-secure.service=prometheus"
# - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
# - "traefik.docker.network=traefikNet"
grafana:
image: grafana/grafana:6.6.1
image: grafana/grafana:11.6.0
restart: unless-stopped
container_name: ${grafanaServName}
volumes:
@@ -48,8 +48,8 @@ services:
- "traefik.enable=true"
- "traefik.http.routers.grafana-secure.entrypoints=websecure"
- "traefik.http.middlewares.grafana-stripprefix.stripprefix.prefixes=/grafana"
- "traefik.http.routers.grafana-secure.rule=Host(`${site}.${domain}`) && PathPrefix(`/grafana`)"
# - "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.grafana-secure.rule=Host(`grafana.${domain}`)"
#- "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.grafana-secure.service=grafana"
- "traefik.http.routers.grafana-secure.middlewares=grafana-stripprefix,test-adminipallowlist@file,traefik-auth"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"

File diff suppressed because it is too large.

File diff suppressed because it is too large.

@@ -0,0 +1,874 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "11.6.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "stat",
"name": "Stat",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"description": "Docker monitoring with Prometheus and cAdvisor",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [
{
"asDropdown": false,
"icon": "external link",
"includeVars": false,
"keepTime": false,
"tags": [],
"targetBlank": true,
"title": "Portainer",
"tooltip": "",
"type": "link",
"url": "https://portainer.kaz.bzh/"
}
],
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 8,
"panels": [],
"repeat": "host",
"title": "$host",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 0,
"y": 1
},
"id": 7,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "count(container_last_seen{image!=\"\", host=\"$host\"})",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_last_seen",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Running containers",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "mbytes"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 8,
"y": 1
},
"id": 5,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(container_memory_usage_bytes{image!=\"\", host=\"$host\"})/1024/1024",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Total Memory Usage",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"max": 100,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 16,
"y": 1
},
"id": 6,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(rate(container_cpu_user_seconds_total{image!=\"\", host=\"$host\"}[5m]) * 100)",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Total CPU Usage",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [
{
"oneClick": false,
"targetBlank": true,
"title": "Portainer host",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers"
},
{
"targetBlank": true,
"title": "Portainer container",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers/${__field.labels.id.21}${__field.labels.id.22}${__field.labels.id.23}${__field.labels.id.24}${__field.labels.id.25}${__field.labels.id.26}${__field.labels.id.27}${__field.labels.id.28}${__field.labels.id.29}${__field.labels.id.30}${__field.labels.id.31}${__field.labels.id.32}"
}
],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": [
{
"__systemRef": "hideSeriesFrom",
"matcher": {
"id": "byNames",
"options": {
"mode": "exclude",
"names": [
"lagalette-orga/lagalette-wpServ"
],
"prefix": "All except:",
"readOnly": true
}
},
"properties": [
{
"id": "custom.hideFrom",
"value": {
"legend": false,
"tooltip": false,
"viz": true
}
}
]
}
]
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 4
},
"id": 2,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "rate(container_cpu_user_seconds_total{image!=\"\", host=\"$host\"}[5m]) * 100",
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "cpu",
"range": true,
"refId": "A",
"step": 10
}
],
"title": "CPU Usage",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [
{
"targetBlank": true,
"title": "Portainer host",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers"
},
{
"targetBlank": true,
"title": "Portainer container",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers/${__field.labels.id.21}${__field.labels.id.22}${__field.labels.id.23}${__field.labels.id.24}${__field.labels.id.25}${__field.labels.id.26}${__field.labels.id.27}${__field.labels.id.28}${__field.labels.id.29}${__field.labels.id.30}${__field.labels.id.31}${__field.labels.id.32}"
}
],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 11
},
"id": 1,
"links": [
{
"targetBlank": true,
"title": "Portainer",
"url": "https://portainer.kaz.bzh"
}
],
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "builder",
"expr": "container_memory_usage_bytes{image!=\"\", host=\"$host\"}",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 10
}
],
"title": "Memory Usage",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 18
},
"id": 3,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "irate(container_network_receive_bytes_total{image!=\"\", host=\"$host\"}[5m])",
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_network_receive_bytes_total",
"range": true,
"refId": "A",
"step": 20
}
],
"title": "Network Rx",
"transformations": [
{
"id": "renameByRegex",
"options": {
"regex": "(.*)",
"renamePattern": "$1"
}
}
],
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 18
},
"id": 9,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "irate(container_network_transmit_bytes_total{image!=\"\", host=\"$host\"}[5m])",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_network_receive_bytes_total",
"range": true,
"refId": "B",
"step": 20
}
],
"title": "Network Tx",
"type": "timeseries"
}
],
"refresh": "30s",
"schemaVersion": 41,
"tags": [],
"templating": {
"list": [
{
"allowCustomValue": false,
"current": {},
"definition": "label_values(host)",
"includeAll": true,
"multi": true,
"name": "host",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(host)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
},
{
"baseFilters": [],
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"filters": [
{
"condition": "",
"key": "container_label_com_docker_compose_project",
"keyLabel": "container_label_com_docker_compose_project",
"operator": "=~",
"value": ".*",
"valueLabels": [
".*"
]
}
],
"hide": 1,
"name": "filter",
"type": "adhoc"
}
]
},
"time": {
"from": "now-3h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Docker monitoring par host",
"uid": "eekgch7tdq8sgc",
"version": 29,
"weekStart": ""
}
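The `Portainer container` data links in the dashboard above rebuild a container ID by concatenating twelve per-character labels, `${__field.labels.id.21}` through `${__field.labels.id.32}`. A hedged Python sketch of the assumed slicing (the cgroup path format and offsets are assumptions, based on systemd-style ids such as `/system.slice/docker-<64-hex>.scope`, where character 21 is the first hex digit):

```python
# Hypothetical cAdvisor `id` label under the systemd cgroup driver.
full_id = "ab12cd34ef56" + "0" * 52            # made-up 64-hex container id
cgroup_id = f"/system.slice/docker-{full_id}.scope"

# len("/system.slice/docker-") == 21, so characters 21..32 of the label
# form the 12-char short id that the Portainer container URL expects.
short_id = cgroup_id[21:33]
print(short_id)  # -> ab12cd34ef56
```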


@@ -0,0 +1,442 @@
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 14
},
"id": 84,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max",
"min"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "rate(node_network_receive_bytes_total{host=\"$host\", device=~\"$device\"}[5m])",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{device}} - rx",
"range": true,
"refId": "A",
"step": 240
},
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "- rate(node_network_transmit_bytes_total{host=\"$host\", device=~\"$device\"}[5m])",
"hide": false,
"instant": false,
"legendFormat": "{{device}} - tx",
"range": true,
"refId": "B"
}
],
"title": "Network Traffic Rx",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"max": 100,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 14
},
"id": 174,
"options": {
"alertThreshold": true,
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "(node_filesystem_size_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}-node_filesystem_free_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}) *100/(node_filesystem_avail_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}+(node_filesystem_size_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}-node_filesystem_free_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}))",
"format": "time_series",
"instant": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{mountpoint}}",
"refId": "A"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "node_filesystem_files_free{host=\"$host\",fstype=~\"ext.?|xfs\"} / node_filesystem_files{host=\"$host\",fstype=~\"ext.?|xfs\"}",
"hide": true,
"interval": "",
"legendFormat": "Inodes{{instance}}{{mountpoint}}",
"refId": "B"
}
],
"title": "Disk",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"description": "Physical machines only",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "celsius"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 21
},
"id": 175,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"editorMode": "code",
"expr": "node_thermal_zone_temp{host=\"$host\"}",
"legendFormat": "{{type}}-zone{{zone}}",
"range": true,
"refId": "A"
}
],
"title": "Temperature",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 21
},
"id": 176,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"editorMode": "code",
"expr": "rate(node_disk_reads_completed_total{host=\"$host\"}[2m])",
"legendFormat": "{{device}} reads",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": " rate(node_disk_writes_completed_total{host=~\"$host\"}[2m])",
"hide": false,
"instant": false,
"legendFormat": "{{device}} writes",
"range": true,
"refId": "B"
}
],
"title": "Disks IOs",
"type": "timeseries"
}
],
"preload": false,
"refresh": "5s",
"schemaVersion": 41,
"tags": [],
"templating": {
"list": [
{
"allowCustomValue": false,
"current": {
"text": "kazguel",
"value": "kazguel"
},
"definition": "label_values(host)",
"includeAll": false,
"name": "host",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(host)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
},
{
"allowCustomValue": false,
"current": {
"text": [
"ens18"
],
"value": [
"ens18"
]
},
"definition": "label_values(node_network_info{device!~\"br.*|veth.*|lo.*|tap.*|docker.*|vibr.*\"},device)",
"includeAll": true,
"label": "NIC",
"multi": true,
"name": "device",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(node_network_info{device!~\"br.*|veth.*|lo.*|tap.*|docker.*|vibr.*\"},device)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Vue Serveur",
"uid": "deki6c3qvihhcd",
"version": 22
}
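The long PromQL behind the "Disk" panel above reduces to `used * 100 / (avail + used)`, with `used = size - free` — i.e. usage relative to non-root-reserved space, which is what `df` reports. A minimal numeric sketch with made-up values:

```python
# Hypothetical filesystem numbers (bytes); avail < free because ext*
# filesystems reserve a slice of blocks for root.
size, free, avail = 100_000_000_000, 40_000_000_000, 35_000_000_000

used = size - free
disk_pct = used * 100 / (avail + used)  # the panel's expression, simplified
print(round(disk_pct, 1))  # -> 63.2
```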

File diff suppressed because it is too large.

@@ -1,12 +1,108 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_interval: 60s
evaluation_interval: 60s
scrape_timeout: 55s
rule_files:
- 'alert.rules'
scrape_configs:
- job_name: 'traefik'
scrape_interval: 5s
# unused for now
#- job_name: 'traefik'
# scrape_interval: 5s
# static_configs:
# - targets: ['reverse-proxy:8080']
- job_name: prometheus
static_configs:
- targets: ['dashboard.kaz.sns:8289','dashboard2.kaz.sns:8289']
- targets: ["prometheus:9090"]
- job_name: cadvisor-prod1
scheme: "https"
static_configs:
- targets: ["cadvisor-prod1.kaz.bzh:443"]
labels:
host: 'prod1'
portainer_id: 2
- job_name: cadvisor-prod2
scheme: "https"
static_configs:
- targets: ["cadvisor-prod2.kaz.bzh:443"]
labels:
host: 'prod2'
portainer_id: 4
- job_name: cadvisor-kazoulet
scheme: "https"
static_configs:
- targets: ["cadvisor-kazoulet.kaz.bzh:443"]
labels:
host: 'kazoulet'
portainer_id: 3
- job_name: cadvisor-tykaz
scheme: "https"
static_configs:
- targets: ["cadvisor-tykaz.kaz.bzh:443"]
labels:
host: 'tykaz'
portainer_id: 10
- job_name: cadvisor-kazguel
scheme: "https"
static_configs:
- targets: ["cadvisor-kazguel.kaz.bzh:443"]
labels:
host: 'kazguel'
portainer_id: 11
- job_name: cadvisor-kazkouil
scheme: "https"
static_configs:
- targets: ["cadvisor-dev.kazkouil.fr:443"]
labels:
host: 'kazkouil'
portainer_id: 5
- job_name: node-exporter-prod1
static_configs:
# - targets: ["prod1.kaz.bzh:9100","prod2.kaz.bzh:9100","kazoulet.kaz.bzh:9100","tykaz.kaz.bzh:9100","kazguel.kaz.bzh:9100","kazkouil.fr:9100"]
- targets: ["prod1.kaz.bzh:9100"]
labels:
host: 'prod1'
- job_name: node-exporter-prod2
static_configs:
# - targets: ["prod1.kaz.bzh:9100","prod2.kaz.bzh:9100","kazoulet.kaz.bzh:9100","tykaz.kaz.bzh:9100","kazguel.kaz.bzh:9100","kazkouil.fr:9100"]
- targets: ["prod2.kaz.bzh:9100"]
labels:
host: 'prod2'
- job_name: node-exporter-kazoulet
static_configs:
- targets: ["kazoulet.kaz.bzh:9100"]
labels:
host: 'kazoulet'
- job_name: node-exporter-tykaz
static_configs:
- targets: ["tykaz.kaz.bzh:9100"]
labels:
host: 'tykaz'
- job_name: node-exporter-kazguel
static_configs:
- targets: ["kazguel.kaz.bzh:9100"]
labels:
host: 'kazguel'
- job_name: node-exporter-kazkouil
static_configs:
- targets: ["kazkouil.fr:9100"]
labels:
host: 'kazkouil'
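The node-exporter stanzas above are six copies of the same shape, differing only in host name and target. A hedged Python sketch (host list and port copied from the config; the helper name is made up) that could generate them when templating the file:

```python
import json

# host -> node-exporter target, copied from the scrape config above
hosts = {
    "prod1": "prod1.kaz.bzh",
    "prod2": "prod2.kaz.bzh",
    "kazoulet": "kazoulet.kaz.bzh",
    "tykaz": "tykaz.kaz.bzh",
    "kazguel": "kazguel.kaz.bzh",
    "kazkouil": "kazkouil.fr",
}

def node_exporter_job(host: str, target: str) -> dict:
    """One scrape_configs entry, same shape as the hand-written ones."""
    return {
        "job_name": f"node-exporter-{host}",
        "static_configs": [
            {"targets": [f"{target}:9100"], "labels": {"host": host}},
        ],
    }

scrape_configs = [node_exporter_job(h, t) for h, t in hosts.items()]
print(json.dumps(scrape_configs[0], indent=2))  # JSON is valid YAML
```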

dockers/mastodon/.env Symbolic link

@@ -0,0 +1 @@
../../config/dockers.env


@@ -0,0 +1,6 @@
Initialize the DB:
docker-compose run --rm web bundle exec rails db:setup
Create an admin account:
tootctl accounts create adminkaz --email admin@kaz.bzh --confirmed --role Owner
tootctl accounts approve adminkaz


@@ -0,0 +1,184 @@
# This file is designed for production server deployment, not local development work
# For a containerized local dev environment, see: https://github.com/mastodon/mastodon/blob/main/docs/DEVELOPMENT.md#docker
services:
db:
container_name: ${mastodonDBName}
restart: ${restartPolicy}
image: postgres:14-alpine
shm_size: 256mb
networks:
- mastodonNet
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres']
volumes:
- postgres:/var/lib/postgresql/data
# environment:
# - 'POSTGRES_HOST_AUTH_METHOD=trust'
env_file:
- ../../secret/env-mastodonDB
redis:
container_name: ${mastodonRedisName}
restart: ${restartPolicy}
image: redis:7-alpine
networks:
- mastodonNet
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
volumes:
- redis:/data
# es:
# restart: always
# image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
# environment:
# - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
# - "xpack.license.self_generated.type=basic"
# - "xpack.security.enabled=false"
# - "xpack.watcher.enabled=false"
# - "xpack.graph.enabled=false"
# - "xpack.ml.enabled=false"
# - "bootstrap.memory_lock=true"
# - "cluster.name=es-mastodon"
# - "discovery.type=single-node"
# - "thread_pool.write.queue_size=1000"
# networks:
# - external_network
# - internal_network
# healthcheck:
# test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
# volumes:
# - ./elasticsearch:/usr/share/elasticsearch/data
# ulimits:
# memlock:
# soft: -1
# hard: -1
# nofile:
# soft: 65536
# hard: 65536
# ports:
# - '127.0.0.1:9200:9200'
web:
# You can uncomment the following line if you want to not use the prebuilt image, for example if you have local code changes
# build: .
container_name: ${mastodonServName}
image: ghcr.io/mastodon/mastodon:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
- ../../secret/env-mastodonDB
command: bundle exec puma -C config/puma.rb
networks:
- mastodonNet
healthcheck:
# prettier-ignore
test: ['CMD-SHELL',"curl -s --noproxy localhost localhost:3000/health | grep -q 'OK' || exit 1"]
ports:
- '127.0.0.1:3000:3000'
depends_on:
- db
- redis
# - es
volumes:
- public_system:/mastodon/public/system
- images:/mastodon/app/javascript/images
labels:
- "traefik.enable=true"
- "traefik.http.routers.koz.rule=Host(`${mastodonHost}.${domain}`)"
- "traefik.http.services.koz.loadbalancer.server.port=3000"
- "traefik.docker.network=mastodonNet"
streaming:
# You can uncomment the following lines if you want to not use the prebuilt image, for example if you have local code changes
# build:
# dockerfile: ./streaming/Dockerfile
# context: .
container_name: ${mastodonStreamingName}
image: ghcr.io/mastodon/mastodon-streaming:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
command: node ./streaming/index.js
networks:
- mastodonNet
healthcheck:
# prettier-ignore
test: ['CMD-SHELL', "curl -s --noproxy localhost localhost:4000/api/v1/streaming/health | grep -q 'OK' || exit 1"]
ports:
- '127.0.0.1:4000:4000'
depends_on:
- db
- redis
labels:
- "traefik.enable=true"
- "traefik.http.routers.kozs.rule=(Host(`${mastodonHost}.${domain}`) && PathPrefix(`/api/v1/streaming`))"
- "traefik.http.services.kozs.loadbalancer.server.port=4000"
- "traefik.docker.network=mastodonNet"
sidekiq:
# You can uncomment the following line if you want to not use the prebuilt image, for example if you have local code changes
# build: .
container_name: ${mastodonSidekiqName}
image: ghcr.io/mastodon/mastodon:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
command: bundle exec sidekiq
depends_on:
- db
- redis
networks:
- mastodonNet
volumes:
- public_system:/mastodon/public/system
healthcheck:
test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]
## Uncomment to enable federation with tor instances along with adding the following ENV variables
## http_hidden_proxy=http://privoxy:8118
## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
# tor:
# image: sirboops/tor
# networks:
# - external_network
# - internal_network
#
# privoxy:
# image: sirboops/privoxy
# volumes:
# - ./priv-config:/opt/config
# networks:
# - external_network
# - internal_network
volumes:
postgres:
redis:
public_system:
images:
networks:
mastodonNet:
external: true
name: mastodonNet

dockers/mastodon/env-config Normal file

@@ -0,0 +1,113 @@
# This is a sample configuration file. You can generate your configuration
# with the `bundle exec rails mastodon:setup` interactive setup wizard, but to customize
# your setup even further, you'll need to edit it manually. This sample does
# not demonstrate all available configuration options. Please look at
# https://docs.joinmastodon.org/admin/config/ for the full documentation.
# Note that this file accepts slightly different syntax depending on whether
# you are using `docker-compose` or not. In particular, if you use
# `docker-compose`, the value of each declared variable will be taken verbatim,
# including surrounding quotes.
# See: https://github.com/mastodon/mastodon/issues/16895
# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
# LOCAL_DOMAIN=
# Redis
# -----
REDIS_HOST=redis
REDIS_PORT=
# PostgreSQL
# ----------
DB_HOST=db
#DB_USER=postgres
#DB_NAME=postgres
#DB_PASS=
DB_PORT=5432
# Elasticsearch (optional)
# ------------------------
ES_ENABLED=false
ES_HOST=localhost
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=password
# Secrets
# -------
# Make sure to use `bundle exec rails secret` to generate secrets
# -------
#SECRET_KEY_BASE=
#OTP_SECRET=
# Encryption secrets
# ------------------
# Must be available (and set to same values) for all server processes
# These are private/secret values, do not share outside hosting environment
# Use `bin/rails db:encryption:init` to generate fresh secrets
# Do NOT change these secrets once in use, as this would cause data loss and other issues
# ------------------
#ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=
#ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=
#ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=
# Web Push
# --------
# Generate with `bundle exec rails mastodon:webpush:generate_vapid_key`
# --------
#VAPID_PRIVATE_KEY=
#VAPID_PUBLIC_KEY=
# Sending mail
# ------------
#SMTP_SERVER=
SMTP_PORT=587
#SMTP_LOGIN=
#SMTP_PASSWORD=
#SMTP_FROM_ADDRESS=
# File storage (optional)
# -----------------------
S3_ENABLED=false
S3_BUCKET=files.example.com
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
S3_ALIAS_HOST=files.example.com
# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
# Fetch All Replies Behavior
# --------------------------
# When a user expands a post (DetailedStatus view), fetch all of its replies
# (default: false)
FETCH_REPLIES_ENABLED=false
# Period to wait between fetching replies (in minutes)
FETCH_REPLIES_COOLDOWN_MINUTES=15
# Period to wait after a post is first created before fetching its replies (in minutes)
FETCH_REPLIES_INITIAL_WAIT_MINUTES=5
# Max number of replies to fetch - total, recursively through a whole reply tree
FETCH_REPLIES_MAX_GLOBAL=1000
# Max number of replies to fetch - for a single post
FETCH_REPLIES_MAX_SINGLE=500
# Max number of replies Collection pages to fetch - total
FETCH_REPLIES_MAX_PAGES=500
SINGLE_USER_MODE=false
#EMAIL_DOMAIN_ALLOWLIST=
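As a side note, the two retention values above (31556952 seconds) are exactly one average Gregorian year; a quick check:

```python
# 31556952 s = 365.2425 days, the mean Gregorian year length.
seconds_per_gregorian_year = round(365.2425 * 24 * 3600)
print(seconds_per_gregorian_year)  # -> 31556952
```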


@@ -1,7 +1,7 @@
services:
app:
image: mattermost/mattermost-team-edition:10.5
image: mattermost/mattermost-team-edition:10.9.1
container_name: ${mattermostServName}
restart: ${restartPolicy}
volumes:


@@ -11,3 +11,7 @@ cd $(dirname $0)
"${KAZ_BIN_DIR}/gestContainers.sh" --install -M -agora
docker exec ${mattermostServName} mmctl auth login https://${matterHost}.${domain} --name local-server --username ${mattermost_MM_ADMIN_USER} --password ${mattermost_MM_ADMIN_PASSWORD}
docker exec ${mattermostServName} mmctl channel create --team kaz --name "une-question--un-soucis" --display-name "Une question ? Un souci ?"
docker exec ${mattermostServName} mmctl channel create --team kaz --name "cafe-du-commerce--ouvert-2424h" --display-name "Café du commerce"
docker exec ${mattermostServName} mmctl channel create --team kaz --name "creation-comptes" --display-name "Création comptes"


@@ -1,4 +1,4 @@
FROM paheko/paheko:1.3.12
FROM paheko/paheko:1.3.15
ENV PAHEKO_DIR /var/www/paheko
@@ -11,6 +11,9 @@ RUN mkdir ${PAHEKO_DIR}/users
RUN docker-php-ext-install calendar
RUN apt-get update
RUN apt-get install -y libwebp-dev
RUN docker-php-ext-configure gd --with-jpeg --with-freetype --with-webp
RUN docker-php-ext-install gd
# Billing plugin (the only one not included in the base distribution)
RUN apt-get install unzip


@@ -127,4 +127,4 @@ define('Paheko\SHOW_ERRORS', true);
# added by fab on 21/04/2022
//const PDF_COMMAND = 'prince';
# const PDF_COMMAND = 'auto';
const PDF_COMMAND = 'chromium --no-sandbox --headless --disable-dev-shm-usage --autoplay-policy=no-user-gesture-required --no-first-run --disable-gpu --disable-features=DefaultPassthroughCommandDecoder --use-fake-ui-for-media-stream --use-fake-device-for-media-stream --disable-sync --print-to-pdf=%2$s %1$s';
const PDF_COMMAND = 'chromium --no-sandbox --headless --no-pdf-header-footer --disable-dev-shm-usage --autoplay-policy=no-user-gesture-required --no-first-run --disable-gpu --disable-features=DefaultPassthroughCommandDecoder --use-fake-ui-for-media-stream --use-fake-device-for-media-stream --disable-sync --print-to-pdf=%2$s %1$s';


@@ -0,0 +1,84 @@
services:
webserver:
image: chocobozzz/peertube-webserver:latest
restart: ${restartPolicy}
depends_on:
- peertube
networks:
- peertubeNet
#ports:
#- "80:80"
#- "443:443"
volumes:
- assets:/var/www/peertube/peertube-latest/client/dist:ro
- data:/var/www/peertube/storage
env_file:
- ../../secret/env-${peertubeServName}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${peertubeServName}.rule=Host(`${peertubeHost}.${domain}`)"
- "traefik.docker.network=peertubeNet"
peertube:
image: chocobozzz/peertube:production-bookworm
container_name: ${peertubeServName}
restart: ${restartPolicy}
depends_on:
- postgres
- redis
networks:
- peertubeNet
volumes:
# Remove the following line if you want to use another webserver/proxy or test PeerTube in local
- assets:/app/client/dist
- data:/data
- config:/config
env_file:
- ../../secret/env-${peertubeServName}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${peertubeServName}.rule=Host(`${peertubeHost}.${domain}`)"
- "traefik.docker.network=peertubeNet"
- "traefik.http.services.${peertubeServName}.loadbalancer.server.port=9000"
#traefik.frontend.rule: "Host:videos.kaz.bzh"
#traefik.port: "9000"
# traefik.frontend.redirect.entryPoint: https
postgres:
image: postgres:13-alpine
container_name: ${peertubeDBName}
restart: ${restartPolicy}
networks:
- peertubeNet
volumes:
- db:/var/lib/postgresql/data
env_file:
- ../../secret/env-${peertubeDBName}
labels:
traefik.enable: "false"
redis:
image: redis:6-alpine
container_name: peertubeCache
restart: ${restartPolicy}
networks:
- peertubeNet
env_file:
- ../../secret/env-${peertubeServName}
volumes:
- redis:/data
labels:
traefik.enable: "false"
volumes:
assets:
data:
config:
db:
redis:
networks:
peertubeNet:
external: true
name: peertubeNet


@@ -1,4 +1,4 @@
FROM docker.io/mailserver/docker-mailserver:14.0.0
FROM docker.io/mailserver/docker-mailserver:15.0.2
########################################
# APT local cache


@@ -26,7 +26,7 @@ services:
- filterConfig:/home/filter/config/
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- /etc/letsencrypt:/etc/letsencrypt:ro
- /etc/ssl:/etc/ssl:ro
# - /etc/ssl:/etc/ssl:ro
# - /usr/local/share/ca-certificates:/usr/local/share/ca-certificates:ro
environment:
@@ -41,6 +41,14 @@ services:
cap_add:
- NET_ADMIN
- SYS_PTRACE
labels:
- "traefik.enable=true"
- "traefik.http.routers.mail.rule=Host(`mail.${domain}`) || Host(`smtp.${domain}`)"
- "traefik.http.routers.webmails.rule=Host(`webmail.${domain}`)"
- "traefik.http.middlewares.reg-webmails.redirectregex.regex=^https://webmail.${domain}(.*)"
- "traefik.http.middlewares.reg-webmails.redirectregex.replacement=https://kaz.bzh/relever-ses-mails-chez-kaz-via-un-webmail"
- "traefik.http.middlewares.reg-webmails.redirectregex.permanent=true"
- "traefik.http.routers.webmails.middlewares=reg-webmails"
volumes:
mailData:


@@ -94,10 +94,10 @@ SMTP_ONLY=
# custom => Enables custom certificates
# manual => Let's you manually specify locations of your SSL certificates for non-standard cases
# self-signed => Enables self-signed certificates
#SSL_TYPE=self-signed
SSL_TYPE=letsencrypt
#SSL_CERT_PATH=
#SSL_KEY_PATH=
SSL_TYPE=manual
#SSL_TYPE=letsencrypt
SSL_CERT_PATH=/etc/ssl/certs/mail.pem
SSL_KEY_PATH=/etc/ssl/private/mail.key
# Set how many days a virusmail will stay on the server before being deleted
# empty => 7 days


@@ -26,7 +26,7 @@ services:
- ../../secret/env-${roundcubeServName}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${roundcubeServName}.rule=Host(`${webmailHost}.${domain}`) || host(`roundcube.${domain}`)"
- "traefik.http.routers.${roundcubeServName}.rule=host(`roundcube.${domain}`)"
- "traefik.docker.network=roundcubeNet"
db:


@@ -0,0 +1,42 @@
services:
db:
image: mariadb:11.4
container_name: ${spipDBName}
restart: ${restartPolicy}
env_file:
- ../../secret/env-${spipDBName}
volumes:
- spipDB:/var/lib/mysql
networks:
- spipNet
spip:
image: ipeos/spip:4.4
restart: ${restartPolicy}
container_name: ${spipServName}
env_file:
- ../../secret/env-${spipServName}
links:
- db:mysql
environment:
- SPIP_AUTO_INSTALL=1
- SPIP_DB_HOST=${spipDBName}
- SPIP_SITE_ADDRESS=https://${spipHost}.${domain}
expose:
- 80
labels:
- "traefik.enable=true"
- "traefik.http.routers.${spipServName}.rule=Host(`${spipHost}.${domain}`)"
networks:
- spipNet
volumes:
- spipData:/usr/src/spip
volumes:
spipDB:
spipData:
networks:
spipNet:
external: true
name: spipNet

View File

@@ -99,7 +99,7 @@ RUN echo "root: ADMIN_EMAIL" >> /etc/aliases \
RUN echo aliases_program postalias >>/etc/sympa/sympa/sympa.conf \
&& echo sendmail /usr/sbin/sendmail >>/etc/sympa/sympa/sympa.conf \
&& echo soap_url /sympasoap >>/etc/sympa/sympa/sympa.conf \
&& echo dmarc_protection.mode dmarc_reject >>/etc/sympa/sympa/sympa.conf \
&& echo dmarc_protection.mode dmarc_reject,dmarc_quarantine >>/etc/sympa/sympa/sympa.conf \
&& cp /usr/share/doc/sympa/examples/script/sympa_soap_client.pl.gz /usr/lib/sympa/bin/ \
&& gunzip /usr/lib/sympa/bin/sympa_soap_client.pl.gz \
&& chmod +x /usr/lib/sympa/bin/sympa_soap_client.pl \

View File

@@ -3,6 +3,7 @@ orange.com veryslow:
wanadoo.com veryslow:
wanadoo.fr veryslow:
gmail.com slow:
laposte.net slow:
yahoo.com slow:
yahoo.fr slow:
outlook.com veryslow:

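The transport map above tells Postfix to route mail for large freemail providers through throttled transports (`slow:`, `veryslow:`) so the server stays under their rate limits. A small sketch (not part of the repo) parsing such a map into a lookup table:

```python
def parse_transport_map(text: str) -> dict:
    """Parse Postfix transport-map lines ("domain transport") into a dict."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        domain, transport = line.split(None, 1)
        table[domain] = transport
    return table

SAMPLE = """\
orange.com veryslow:
gmail.com slow:
laposte.net slow:
"""
print(parse_transport_map(SAMPLE)["laposte.net"])  # slow:
```

In the real container the file is compiled with `postmap /etc/postfix/transport` before Postfix consults it.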
View File

@@ -16,7 +16,6 @@ services:
- ${jirafeauServName}:${fileHost}
ports:
- ${SYMPA_IP}:25:25
- ${SYMPA_IP}:80:80
- ${SYMPA_IP}:443:443
env_file:
- ../../secret/env-${sympaServName}
@@ -33,7 +32,12 @@ services:
- ./config/transport:/etc/postfix/transport:rw
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- /etc/letsencrypt:/etc/letsencrypt:ro
- /etc/ssl:/etc/ssl:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.sympa.rule=Host(`listes.${domain}`)"
- "traefik.docker.network=sympaNet"
db:
image: mariadb:10.5

View File

@@ -1,6 +1,6 @@
services:
reverse-proxy:
image: traefik:v3.3.4
image: traefik:v3.4.4
container_name: ${traefikServName}
restart: ${restartPolicy}
# Enables the web UI and tells Traefik to listen to docker
@@ -11,6 +11,7 @@ services:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./conf:/etc/traefik/
- letsencrypt:/letsencrypt
- log:/log
environment:
- TRAEFIK_PROVIDERS_DOCKER=true
- TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT=false
@@ -25,11 +26,19 @@ services:
- TRAEFIK_CERTIFICATESRESOLVERS_letsencrypt_ACME_EMAIL=admin@${domain}
- TRAEFIK_CERTIFICATESRESOLVERS_letsencrypt_ACME_CASERVER=${acme_server}
- TRAEFIK_CERTIFICATESRESOLVERS_letsencrypt_ACME_STORAGE=/letsencrypt/acme.json
- TRAEFIK_CERTIFICATESRESOLVERS_letsencrypt_ACME_TLSCHALLENGE=true
- TRAEFIK_LOG_LEVEL=INFO
- TRAEFIK_CERTIFICATESRESOLVERS_letsencrypt_ACME_HTTPCHALLENGE_ENTRYPOINT=web
- TRAEFIK_API_DASHBOARD=true
# for the migration to traefik 3
- TRAEFIK_CORE_DEFAULTRULESYNTAX=v3
- TZ=Europe/Paris
- TRAEFIK_ACCESSLOG=true
- TRAEFIK_ACCESSLOG_FILEPATH=/log/traefik_acces.log
- TRAEFIK_ACCESSLOG_FILTERS_STATUSCODES=404,403,401
- TRAEFIK_LOG=true
- TRAEFIK_LOG_LEVEL=INFO
- TRAEFIK_LOG_FILEPATH=/log/traefik.log
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik_https.rule=Host(`${site}.${domain}`) && PathPrefix(`/api`, `/dashboard`)"
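Traefik derives each static-configuration key from the underscore-separated path of the corresponding `TRAEFIK_*` environment variable (key matching is case-insensitive), so `TRAEFIK_ACCESSLOG_FILTERS_STATUSCODES` sets `accessLog.filters.statusCodes`. A sketch of that mapping with a hypothetical helper, lowercased here for illustration:

```python
# Hypothetical helper: turn a Traefik env-var name into the dotted
# configuration path it addresses (Traefik matches keys case-insensitively).
def env_to_key(name: str) -> str:
    assert name.startswith("TRAEFIK_")
    return ".".join(part.lower() for part in name[len("TRAEFIK_"):].split("_"))

print(env_to_key("TRAEFIK_ACCESSLOG_FILTERS_STATUSCODES"))
# accesslog.filters.statuscodes
```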
@@ -98,6 +107,15 @@ services:
{{apikaz
- apikazNet
}}
{{mastodon
- mastodonNet
}}
{{peertube
- peertubeNet
}}
{{spip
- spipNet
}}
#### BEGIN ORGA USE_NET
#### END ORGA USE_NET
@@ -201,9 +219,26 @@ networks:
external: true
name: apikazNet
}}
{{mastodon
mastodonNet:
external: true
name: mastodonNet
}}
{{peertube
peertubeNet:
external: true
name: peertubeNet
}}
{{spip
spipNet:
external: true
name: spipNet
}}
#### BEGIN ORGA DEF_NET
#### END ORGA DEF_NET
volumes:
letsencrypt:
log:

View File

@@ -67,3 +67,59 @@ div.kaz::after {
border-width: thin;
border-color: red;
}
div.kaz2:hover {
font-size: initial !important;
color: initial !important;
}
div.kaz2:hover a.kaz2 {
background-size: initial !important;
padding: 4px 0 4px 230px;
}
div.kaz2 a.kaz2 {
background-size: 110px 12px;
padding: 4px 0 4px 120px;
}
div.kaz2 {
font-size: 10px;
color: #969696;
padding: 1pc 0 0 0;
margin: 0 0 0 80px;
min-height: 200px;
clear: left;
}
div.kaz2::before {
content: url("/m/logo.png");
position: absolute;
padding: 0;
margin: 0 0 0 -70px;
width: 50px;
height: 100px;
}
div.kaz2>ul>li {
list-style-type: none; /* Remove bullets */
}
div.kaz2>ul>li::before {
content: "\2713";
color: green;
margin-left: -20px;
margin-right: 10px;
}
a.kaz2 {
background-image: url("/m/coche.png");
background-repeat: no-repeat;
padding: 4px 0 4px 230px;
margin: 0 0 0 0;
min-height: 25px;
}
div.kaz2 div.nb {
padding: 1pc;
margin: 0 0 0 -70px;
display: block;
border-radius: 30px;
border-style: solid;
border-width: thin;
border-color: red;
}

View File

@@ -48,30 +48,18 @@ gandi_dns_gandi_api_key="${gandi_GANDI_KEY}"
####################
# mattermost
mattermost_MYSQL_ROOT_PASSWORD="--clean_val--"
mattermost_MYSQL_DATABASE="--clean_val--"
mattermost_MYSQL_USER="--clean_val--"
mattermost_MYSQL_PASSWORD="--clean_val--"
mattermost_POSTGRES_USER="mattermost"
mattermost_POSTGRES_PASSWORD="--clean_val--"
mattermost_POSTGRES_DB="mattermost"
# Share with mattermostDB
mattermost_MM_DBNAME="${mattermost_MYSQL_DATABASE}"
mattermost_MM_USERNAME="${mattermost_MYSQL_USER}"
mattermost_MM_PASSWORD="${mattermost_MYSQL_PASSWORD}"
mattermost_DB_PORT_NUMBER="3306"
mattermost_DB_HOST="db"
mattermost_MM_SQLSETTINGS_DRIVERNAME="mysql"
mattermost_MM_ADMIN_EMAIL="admin@kaz.bzh"
# mattermost_MM_SQLSETTINGS_DATASOURCE = "MM_USERNAME:MM_PASSWORD@tcp(DB_HOST:DB_PORT_NUMBER)/MM_DBNAME?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
# Don't forget to replace all entries (beginning by MM_ and DB_) in MM_SQLSETTINGS_DATASOURCE with the real variables values.
mattermost_MM_SQLSETTINGS_DATASOURCE="${mattermost_MYSQL_USER}:${mattermost_MYSQL_PASSWORD}@tcp(${mattermost_DB_HOST}:${mattermost_DB_PORT_NUMBER})/${mattermost_MM_DBNAME}?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
# otherwise, with postgres
# mattermost_MM_SQLSETTINGS_DATASOURCE = "postgres://${MM_USERNAME}:${MM_PASSWORD}@db:5432/${MM_DBNAME}?sslmode=disable&connect_timeout=10"
mattermost_MM_ADMIN_EMAIL="${matterHost}@${domain}"
mattermost_MM_ADMIN_USER="admin-mattermost"
mattermost_MM_ADMIN_PASSWORD="--clean_val--@"
mattermost_MM_SQLSETTINGS_DATASOURCE="postgres://${mattermost_POSTGRES_USER}:${mattermost_POSTGRES_PASSWORD}@postgres:5432/${mattermost_POSTGRES_DB}?sslmode=disable&connect_timeout=10"
# to send messages to the agora with mmctl
mattermost_user="admin-mattermost"
mattermost_pass="--clean_val--"
mattermost_user="${mattermost_MM_ADMIN_USER}"
mattermost_pass="${mattermost_MM_ADMIN_PASSWORD}"
mattermost_token="xxx-private"
##################
@@ -303,7 +291,61 @@ castopod_CP_EMAIL_SMTP_PASSWORD=
castopod_CP_EMAIL_FROM=noreply@${domain}
castopod_CP_EMAIL_SMTP_CRYPTO=tls
######################
#####################
# Spip
spip_MYSQL_ROOT_PASSWORD="--clean_val--"
spip_MYSQL_DATABASE="--clean_val--"
spip_MYSQL_USER="--clean_val--"
spip_MYSQL_PASSWORD="--clean_val--"
spip_SPIP_AUTO_INSTALL=1
spip_SPIP_DB_SERVER=mysql
spip_SPIP_DB_LOGIN="${spip_MYSQL_USER}"
spip_SPIP_DB_PASS="${spip_MYSQL_PASSWORD}"
spip_SPIP_DB_NAME="${spip_MYSQL_DATABASE}"
spip_SPIP_ADMIN_NAME=admin
spip_SPIP_ADMIN_LOGIN=admin
spip_SPIP_ADMIN_EMAIL=admin@${domain}
spip_SPIP_ADMIN_PASS="--clean_val--"
spip_PHP_TIMEZONE="Europe/Paris"
#####################
# Peertube
peertube_POSTGRES_USER="--clean_val--"
peertube_POSTGRES_PASSWORD="--clean_val--"
peertube_PEERTUBE_DB_NAME="--clean_val--"
peertube_PEERTUBE_DB_USERNAME="${peertube_POSTGRES_USER}"
peertube_PEERTUBE_DB_PASSWORD="${peertube_POSTGRES_PASSWORD}"
peertube_PEERTUBE_DB_SSL=false
peertube_PEERTUBE_DB_HOSTNAME="${peertubeDBName}"
peertube_PEERTUBE_WEBSERVER_HOSTNAME="${peertubeHost}.${domain}"
peertube_PEERTUBE_TRUST_PROXY="['10.0.0.0/8', '127.0.0.1', 'loopback', '172.18.0.0/16']"
peertube_PEERTUBE_SECRET="--clean_val--"
peertube_PT_INITIAL_ROOT_PASSWORD="--clean_val--"
#peertube_PEERTUBE_SMTP_USERNAME=
#peertube_PEERTUBE_SMTP_PASSWORD=
# Default to Postfix service name "postfix" in docker-compose.yml
# May be the hostname of your Custom SMTP server
peertube_PEERTUBE_SMTP_HOSTNAME=
peertube_PEERTUBE_SMTP_PORT=25
peertube_PEERTUBE_SMTP_FROM=
peertube_PEERTUBE_SMTP_TLS=false
peertube_PEERTUBE_SMTP_DISABLE_STARTTLS=false
peertube_PEERTUBE_ADMIN_EMAIL=
peertube_POSTFIX_myhostname=
#peertube_OPENDKIM_DOMAINS=peertube
peertube_OPENDKIM_RequireSafeKeys=no
peertube_PEERTUBE_OBJECT_STORAGE_UPLOAD_ACL_PUBLIC="public-read"
peertube_PEERTUBE_OBJECT_STORAGE_UPLOAD_ACL_PRIVATE="private"
######################
peertube_POSTGRES_DB="${peertube_PEERTUBE_DB_NAME}"
######################
# SNAPPYMAIL
# Url https://snappymail.${domain}/?admin
# on first launch a password is auto-generated by the app in the
@@ -313,3 +355,11 @@ castopod_CP_EMAIL_SMTP_CRYPTO=tls
snappymail_TZ="Europe/Paris"
snappymail_UPLOAD_MAX_SIZE="100M"
####################
# mastodon
mastodon_POSTGRES_USER="--clean_val--"
mastodon_POSTGRES_PASSWORD="--clean_val--"
mastodon_POSTGRES_DB=mastodon
mastodon_DB_USER="${mastodon_POSTGRES_USER}"
mastodon_DB_PASS="${mastodon_POSTGRES_PASSWORD}"
mastodon_DB_NAME=mastodon
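The `mattermost_MM_SQLSETTINGS_DATASOURCE` above is built by interpolating the postgres credentials into a connection URL. A sketch of the same composition with hypothetical placeholder values (the real ones are the `--clean_val--` secrets substituted at deploy time):

```python
# Hypothetical placeholder credentials; real values come from the secret files.
postgres_user = "mattermost"
postgres_password = "changeme"
postgres_db = "mattermost"

datasource = (
    f"postgres://{postgres_user}:{postgres_password}"
    f"@postgres:5432/{postgres_db}"
    "?sslmode=disable&connect_timeout=10"
)
print(datasource)
```

Note that a password containing `@`, `/`, or `:` would need percent-encoding in this URL form.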

View File

@@ -0,0 +1,3 @@
ALWAYSDATA_TOKEN=
ALWAYSDATA_API=
ALWAYSDATA_ACCOUNT=

View File

@@ -0,0 +1,6 @@
DB_USER=
DB_NAME=
DB_PASS=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=postgres

View File

@@ -0,0 +1,10 @@
SECRET_KEY_BASE=
OTP_SECRET=
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=
VAPID_PRIVATE_KEY=
VAPID_PUBLIC_KEY=
SMTP_PASSWORD=
EMAIL_DOMAIN_ALLOWLIST=
ADMIN_PASSWORD=

View File

@@ -1,8 +1,3 @@
MYSQL_ROOT_PASSWORD=
MYSQL_DATABASE=
MYSQL_USER=
MYSQL_PASSWORD=
MM_MYSQL_USER=
MM_MYSQL_PASSWORD=
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=

View File

@@ -1,15 +1,4 @@
# share with matterDB
MM_DBNAME=
MM_USERNAME=
MM_PASSWORD=
MM_SQLSETTINGS_DATASOURCE=
MM_ADMIN_EMAIL=
MM_ADMIN_USER=
MM_ADMIN_PASSWORD=
DB_HOST=
DB_PORT_NUMBER=
MM_SQLSETTINGS_DRIVERNAME=
MM_SQLSETTINGS_DATASOURCE=
MM_ADMIN_PASSWORD=

secret.tmpl/env-spipDB Normal file
View File

@@ -0,0 +1,4 @@
MYSQL_ROOT_PASSWORD=
MYSQL_DATABASE=
MYSQL_USER=
MYSQL_PASSWORD=

secret.tmpl/env-spipServ Normal file
View File

@@ -0,0 +1,10 @@
SPIP_AUTO_INSTALL=1
SPIP_DB_SERVER=mysql
SPIP_DB_LOGIN=
SPIP_DB_PASS=
SPIP_DB_NAME=
SPIP_ADMIN_NAME=
SPIP_ADMIN_LOGIN=
SPIP_ADMIN_EMAIL=
SPIP_ADMIN_PASS=
PHP_TIMEZONE=