Compare commits

138 Commits

Author SHA1 Message Date
974f826757 Merge branch 'master' into feat/python 2025-09-08 17:11:58 +02:00
b1b9f3afed sympa: unrelated change 2025-09-08 17:11:19 +02:00
e3041f2df6 Fix Windows line-ending problem ... 2025-09-08 01:38:43 +02:00
060c6e4443 The wikiconf folder is indeed used! 2025-09-07 21:44:53 +02:00
13826a7c4c Name for the orgas' spip container 2025-09-07 21:42:04 +02:00
ab2d4e1610 upgrade traefik to 3.5.1 2025-09-04 14:04:26 +02:00
fd48ef488f But the network IS created by container.sh start, 5 lines further down. 2025-09-04 00:54:18 +02:00
731c5dc8a0 GetPassword removed: we source the file instead, which is better! 2025-09-03 23:52:01 +02:00
ea817aba47 Fix wp and wiki init problem. 2025-09-03 22:57:03 +02:00
bca0693a14 Merge branch 'gestionSecrets' 2025-09-03 21:49:21 +02:00
0d00b418a0 upgrade MM. Careful: this image is slim, so no more curl; I therefore removed the DB health check 2025-08-31 18:33:46 +02:00
98cc875611 fix castopod/firefox 2025-08-14 13:06:30 +02:00
618f22db6b KAZ_KEY_DIR 2025-08-07 20:27:05 +02:00
290c6fe360 removal of SetAllPass 2025-08-07 20:19:00 +02:00
6619246346 fix no password 2025-08-06 00:04:26 +02:00
d10fb08f71 Merge branch 'gestionSecrets' of git+ssh://git.kaz.bzh:2202/KAZ/KazV2 into gestionSecrets 2025-08-05 23:31:01 +02:00
b9e605a359 Fix volume init problem 2025-08-05 23:28:46 +02:00
a3f448b457 missing .env 2025-07-31 14:33:47 +02:00
77a3819beb The secret folder must be created for each orga! (case: docker-compose updated after adding a service) 2025-07-31 14:18:57 +02:00
ec16cdfe92 A few remaining calls + migration script 2025-07-31 06:16:36 +02:00
6877a5f872 DB init is done in initdb.sh instead ... 2025-07-31 05:36:32 +02:00
3a8bd9ec1a Merge branch 'gestionSecrets' of git+ssh://git.kaz.bzh:2202/KAZ/KazV2 into gestionSecrets 2025-07-31 05:05:13 +02:00
1f9ccff5b6 The orgas + a few changes for getpasswords.sh 2025-07-31 05:04:29 +02:00
ff69724f86 execute permissions on the scripts 2025-07-31 01:55:59 +02:00
3b5d01d5df yaml wants spaces 2025-07-31 00:14:34 +02:00
99779a70ff Retrieve the values from docker.env! 2025-07-30 23:08:48 +02:00
400775bf41 removing double quotes + various small bugs 2025-07-30 02:57:38 +02:00
8baf9fc492 Merge branch 'gestionSecrets' of git+ssh://git.kaz.bzh:2202/KAZ/KazV2 into gestionSecrets 2025-07-29 16:37:45 +02:00
8d26a57b6b To test: password generation! 2025-07-29 16:33:17 +02:00
4d127a57e2 complete user registration 2025-07-28 17:07:01 +02:00
3a3c4f4d0c up-to-date roundcube version 2025-07-27 10:45:14 +02:00
898d6a652d Revert "docker compose change to get the latest roundcube version"
This reverts commit 3bf952b57f.
2025-07-27 10:28:39 +02:00
3bf952b57f docker compose change to get the latest roundcube version 2025-07-27 09:47:27 +02:00
70442f6464 fix fix 2025-07-26 17:59:16 +02:00
33f793fcbe fix cert sympa 2025-07-26 17:51:13 +02:00
6e58f328e4 Merge branch 'master' into feat/python 2025-07-26 15:45:22 +02:00
813e0e761f typo 2025-07-26 15:41:38 +02:00
2e62e9782e mattermost channel for account creation 2025-07-26 15:40:15 +02:00
9f0b8f2e1e paheko standby 2025-07-26 15:39:49 +02:00
fc4adc0fae mattermost first launch 2025-07-26 15:32:54 +02:00
74812fa79a mattermost admin user/pass 2025-07-26 14:45:27 +02:00
490c527d9a smallies 2025-07-26 14:41:31 +02:00
3220d862a6 fix mattermost 2025-07-26 13:52:15 +02:00
51cd89c16d smallies 2025-07-25 15:55:30 +02:00
1936326535 fix mattermost default 2025-07-25 15:52:47 +02:00
a630e47bfe fix mattermost pgsql 2025-07-25 15:10:43 +02:00
b4eee312df python 2025-07-25 14:49:46 +02:00
27ca4dfce3 init python 2025-07-24 21:47:54 +02:00
HPL
5fbc804edd Update secret.tmpl/SetAllPass.sh 2025-07-23 06:34:30 +02:00
44ff3980f9 SetAllPass is gone! The secretgen still has to be redone + the values "linked" by setallpass need review. Nothing is tested for now. 2025-07-23 03:19:27 +02:00
33fc237cb8 fix traefik 2025-07-17 18:26:36 +02:00
ed5ef23ed2 fix traefik 2025-07-17 17:23:13 +02:00
6f33808736 fix vm 2025-07-17 16:13:30 +02:00
477a9155fe upgrade traefik to 3.4.4 2025-07-16 06:43:16 +02:00
bce3b9eff5 New spip service! 2025-07-05 00:36:43 +02:00
d506f000a3 grafana: save current dashboards
Add custom dashboards, remove unused ones.
2025-06-25 22:57:36 +02:00
8906974a83 upgrade MM to 10.9.1 2025-06-23 17:23:16 +02:00
c12cafc277 suppress the echoes in interoPahko (to avoid the emails) 2025-06-20 09:33:55 +02:00
f268f5f5f4 fix tty on createUser for the docker execs now that we run from cron 2025-06-20 09:22:55 +02:00
d8bc48ec3a drop the echo "Rien à créer" 2025-06-20 07:01:42 +02:00
3940c3801d finish the Setup command 2025-06-19 23:43:58 +02:00
00f9e3ee5f add laposte.net 2025-06-19 21:18:18 +02:00
1bacfd307c remove snappymail from NC, currently no longer supported 2025-06-18 08:10:08 +02:00
8f6913565c rollback: mariadb 11.4 is required for the current NC versions 2025-06-15 09:57:41 +02:00
62b34e4ac0 mariadb = mariadb:latest 2025-06-15 09:55:01 +02:00
70c32de959 try a mariadb latest image in the wp and cloud docker-compose!
add MARIADB_AUTO_UPGRADE=1 back into docker-compose
2025-06-15 09:52:49 +02:00
3eedd4293b trying mariadb:latest for gitea, don't hit me :-p 2025-06-15 09:09:24 +02:00
a2f737eb46 auto-upgrade the mariadb database 2025-06-15 09:03:54 +02:00
82a3440d5a upgrade mariadb to 11.8.2 2025-06-15 08:58:51 +02:00
a3e86ac6ac dmarc sympa 2025-06-11 09:38:19 +02:00
556471d321 doc certbot 2025-06-10 09:45:52 +02:00
9d666afab5 certbot dns challenge for alwaysdata 2025-06-10 09:43:17 +02:00
5eb4ccb58e mastodon 2025-06-06 14:51:27 +02:00
84849b71b1 put the traefik logs in a volume
show the 4xx errors (for use with fail2ban on the host)
2025-06-06 09:59:41 +02:00
316206140a remove --no-pdf-header-footer from headless chromium for pdf printing 2025-05-27 10:46:34 +02:00
7cc7df6ac1 upgrade MM 2025-05-21 19:47:14 +02:00
0d1c13d125 upgrade MM to 10.8 2025-05-20 07:07:34 +02:00
cb9a449882 security upgrade on paheko 2025-05-16 16:16:28 +02:00
678388afaa update paheko to 1.3.14 2025-05-13 14:13:25 +02:00
016b47774b prometheus: portainer ids for grafana
Add each machine's portainer ids to generate a url.
2025-05-11 22:00:07 +02:00
6db4d1a5a8 add logs for the 404 403 401 errors returned by the reverse proxy 2025-05-11 09:32:07 +02:00
f54de7a26c update css 2025-05-10 16:52:20 +02:00
75678ca093 remove the hard-coded url 2025-05-10 10:00:42 +02:00
554d7a5ddc upgrade mailServer to 15.0.2 2025-05-10 09:39:17 +02:00
62e75a42f2 mastodon passwords WIP 2025-05-09 16:52:04 +02:00
4a6b575ce0 update traefik to 3.4.0 2025-05-08 18:50:45 +02:00
8d83a2716b fix the email 2025-05-07 08:12:57 +02:00
4807624dbc fix url for kazkouil 2025-05-02 13:05:42 +02:00
b5aa7e9945 put the prod1 version back in git 2025-05-02 11:55:40 +02:00
8d0caad3c7 simplify the prometheus config 2025-05-02 11:47:09 +02:00
87b007d4b9 add label for cadvisor 2025-05-02 11:45:39 +02:00
7852e82e74 env cadvisor 2025-04-30 15:23:17 +02:00
9b92276fc1 settings cadvisor 2025-04-30 15:11:24 +02:00
e39ce5518c fix monitoring (cadvisor goes through traefik) 2025-04-29 23:36:14 +02:00
ea6e48886d update: clean acme.json 2025-04-25 11:07:53 +02:00
4187f4b772 add cadvisor/prometheus/grafana + dashboards 2025-04-24 23:10:06 +02:00
b00916ceba update: clean acme using the server's correct IP 2025-04-24 14:35:44 +02:00
f95b959bf2 update: cleanup 2025-04-24 00:27:47 +02:00
609b5c1d62 update nettoyer_acme_json 2025-04-24 00:16:51 +02:00
a6a20e0dea jq is a killer tool! 2025-04-24 00:03:30 +02:00
821335e1ca init cleanup of traefik's LE certs in acme.json 2025-04-23 22:33:03 +02:00
e31c75d8b1 upgrade MM to 10.7 2025-04-22 16:28:57 +02:00
c041bac532 upgrade traefik 2025-04-21 09:05:18 +02:00
8eb33813d6 file date 2025-04-20 17:57:31 +02:00
faf2e2bc8e add dyn DNS 2025-04-20 10:51:20 +02:00
adc0528c81 peertube 2025-04-20 09:34:17 +02:00
1259857474 add peertube 2025-04-19 17:10:33 +02:00
db684d4ebd sympa ssl 2025-04-19 16:59:09 +02:00
df657bb035 challenge acme traefik 2025-04-19 16:56:03 +02:00
5d8634c8df sympa traefik 2025-04-19 16:40:16 +02:00
c55e984918 Merge branch 'master' of ssh://git.kaz.bzh:2202/KAZ/KazV2 2025-04-19 14:23:14 +02:00
4b95553be0 certificates and webmail 2025-04-19 14:23:06 +02:00
1f8520db90 webmail 2025-04-19 13:49:22 +02:00
9de98c4021 fix alias bug 2025-04-18 15:16:55 +02:00
85b8048aa9 certificates for mail and lists 2025-04-18 13:36:44 +02:00
0bf808f0cf lowercase emails 2025-04-16 19:32:01 +02:00
1609e7725f add AD account 2025-04-11 09:27:46 +02:00
6bd95d1056 switching from gandi to alwaysdata for dns! 2025-04-11 09:25:05 +02:00
07f8ef8151 update dns_alwaysdata.sh 2025-04-06 00:38:16 +02:00
aad57eafae update AD dns 2025-04-05 21:19:36 +02:00
4370436c42 change for officeurl 2025-03-25 00:33:01 +01:00
79c52c2067 add mastodon 2025-03-24 19:33:04 +01:00
d341122676 init api for alwaysdata 2025-03-19 19:05:37 +01:00
93a929d291 upgrade mm to 10.6 2025-03-19 10:44:09 +01:00
5d6e46bb37 mastodon volumes 2025-03-17 08:33:18 +01:00
545ed42968 mastodon images volume 2025-03-17 08:31:21 +01:00
53ba95b9d3 mastodon env 2025-03-15 10:26:57 +01:00
61f4629d1f add mastodon backup 2025-03-14 22:47:36 +01:00
b7bb45869a mastodon traefik streaming 2025-03-14 17:44:29 +01:00
888c614bdd mastodon doc 2025-03-14 17:37:17 +01:00
16683616c1 mastodon continued 2025-03-14 17:04:23 +01:00
c613184594 bootstrap mastodon 2025-03-14 16:58:02 +01:00
aaf3d9343e paheko fix gd/webp 2025-03-12 09:41:17 +01:00
e8fdead666 workaround for paheko: install gd with dompdf 2025-03-12 08:46:50 +01:00
b28c04928b add webmail 2025-03-10 22:23:54 +01:00
286b2fa144 update Mailserver 2025-03-10 21:52:15 +01:00
6a7fd829e5 update Dockerfile to install gd 2025-03-10 21:48:14 +01:00
5f20548e21 upgrade 2025-03-10 21:05:01 +01:00
161 changed files with 33632 additions and 1604 deletions


@@ -16,7 +16,6 @@ KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
setKazVars
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
usage () {
echo $(basename "$0") " [-h] [-help] [-timestamp] template dst"
@@ -64,8 +63,8 @@ done
-e "s|__DOKUWIKI_HOST__|${dokuwikiHost}|g"\
-e "s|__DOMAIN__|${domain}|g"\
-e "s|__FILE_HOST__|${fileHost}|g"\
-e "s|__PAHEKO_API_PASSWORD__|${paheko_API_PASSWORD}|g"\
-e "s|__PAHEKO_API_USER__|${paheko_API_USER}|g"\
# -e "s|__PAHEKO_API_PASSWORD__|${paheko_API_PASSWORD}|g"\
# -e "s|__PAHEKO_API_USER__|${paheko_API_USER}|g"\
-e "s|__PAHEKO_HOST__|${pahekoHost}|g"\
-e "s|__GIT_HOST__|${gitHost}|g"\
-e "s|__GRAV_HOST__|${gravHost}|g"\
@@ -79,12 +78,13 @@ done
-e "s|__SMTP_HOST__|${smtpHost}|g"\
-e "s|__SYMPADB__|${sympaDBName}|g"\
-e "s|__SYMPA_HOST__|${sympaHost}|g"\
-e "s|__SYMPA_MYSQL_DATABASE__|${sympa_MYSQL_DATABASE}|g"\
-e "s|__SYMPA_MYSQL_PASSWORD__|${sympa_MYSQL_PASSWORD}|g"\
-e "s|__SYMPA_MYSQL_USER__|${sympa_MYSQL_USER}|g"\
# -e "s|__SYMPA_MYSQL_DATABASE__|${sympa_MYSQL_DATABASE}|g"\
# -e "s|__SYMPA_MYSQL_PASSWORD__|${sympa_MYSQL_PASSWORD}|g"\
# -e "s|__SYMPA_MYSQL_USER__|${sympa_MYSQL_USER}|g"\
-e "s|__VIGILO_HOST__|${vigiloHost}|g"\
-e "s|__WEBMAIL_HOST__|${webmailHost}|g"\
-e "s|__CASTOPOD_HOST__|${castopodHost}|g"\
-e "s|__SPIP_HOST__|${spipHost}|g"\
-e "s|__IMAPSYNC_HOST__|${imapsyncHost}|g"\
-e "s|__YAKFORMS_HOST__|${yakformsHost}|g"\
-e "s|__WORDPRESS_HOST__|${wordpressHost}|g"\

bin/certbot-dns-alwaysdata.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
# certbot certonly --manual --preferred-challenges=dns --manual-auth-hook certbot-dns-alwaysdata.sh --manual-cleanup-hook certbot-dns-alwaysdata.sh -d "*.kaz.bzh" -d "kaz.bzh"
export KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. $KAZ_KEY_DIR/env-alwaysdata
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${CERTBOT_DOMAIN} | jq '.[0].id')
add_record(){
RECORD_ID=$(curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"TXT\", \"name\":\"_acme-challenge\", \"value\":\"${CERTBOT_VALIDATION}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/")
}
del_record(){
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=_acme-challenge&type=TXT&domain=${DOMAIN_ID}" | jq ".[0].id")
curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
}
if [ -z ${CERTBOT_AUTH_OUTPUT} ]; then
add_record
else
del_record
fi
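The hook above relies on certbot's manual-hook contract: certbot exports CERTBOT_DOMAIN and CERTBOT_VALIDATION on both invocations, and sets CERTBOT_AUTH_OUTPUT only on the cleanup call. A minimal sketch of that branching, with placeholder values and no AlwaysData API calls:

```shell
#!/bin/bash
# Sketch only: certbot normally exports these variables itself; the values
# below are placeholders and no network request is made here.
hook_phase() {
    if [ -z "${CERTBOT_AUTH_OUTPUT}" ]; then
        echo "add_record"   # auth phase: create the _acme-challenge TXT record
    else
        echo "del_record"   # cleanup phase: delete it again
    fi
}
unset CERTBOT_AUTH_OUTPUT
phase1=$(hook_phase)         # first call: variable unset by certbot
CERTBOT_AUTH_OUTPUT="done"
phase2=$(hook_phase)         # second call: variable set by certbot
echo "${phase1} then ${phase2}"
```

Branching on CERTBOT_AUTH_OUTPUT is what lets the same file be passed as both --manual-auth-hook and --manual-cleanup-hook in the usage comment above.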


@@ -6,8 +6,6 @@ setKazVars
RUN_PASS_DIR="secret"
TMPL_PASS_DIR="secret.tmpl"
RUN_PASS_FILE="${RUN_PASS_DIR}/SetAllPass.sh"
TMPL_PASS_FILE="${TMPL_PASS_DIR}/SetAllPass.sh"
NEED_GEN=
########################################
@@ -48,7 +46,12 @@ getVars () {
# get lvalues in script
getSettedVars () {
# $1 : filename
grep "^[^#]*=..*" $1 | grep -v '^[^#]*=".*--clean_val--.*"' | grep -v '^[^#]*="${' | sort -u
grep -E "^[^=#]*(USER|PASS|TOKEN|DATABASE|ACCOUNT|LOGIN|KEY)[^#]*=..*" ./* | grep -vE '^[^#=]*=.*@@(user|pass|db|token|gv|cv)@@.*' | sort -u
}
getUnsettedVars () {
# $1 : filename
grep -vE '^[^#=]*=.*@@(user|pass|db|token|gv|cv)@@.*' ./* | sort -u
}
getVarFormVal () {
@@ -57,60 +60,6 @@ getVarFormVal () {
grep "^[^#]*=$1" $2 | sed 's/\s*\([^=]*\).*/\1/'
}
########################################
# synchronized SetAllPass.sh (find missing lvalues)
updatePassFile () {
# $1 : ref filename
# $2 : target filename
REF_FILE="$1"
TARGET_FILE="$2"
NEED_UPDATE=
while : ; do
declare -a listRef listTarget missing
listRef=($(getVars "${REF_FILE}"))
listTarget=($(getVars "${TARGET_FILE}"))
missing=($(comm -23 <(printf "%s\n" ${listRef[@]}) <(printf "%s\n" ${listTarget[@]})))
if [ -n "${missing}" ]; then
echo "missing vars in ${YELLOW}${BOLD}${TARGET_FILE}${NC}:${RED}${BOLD}" ${missing[@]} "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${REF_FILE}" "${TARGET_FILE}"
NEED_UPDATE=true
break
;;
[Nn]*)
break
;;
esac
else
break
fi
done
}
updatePassFile "${TMPL_PASS_FILE}" "${RUN_PASS_FILE}"
[ -n "${NEED_UPDATE}" ] && NEED_GEN=true
updatePassFile "${RUN_PASS_FILE}" "${TMPL_PASS_FILE}"
########################################
# check empty pass in TMPL_PASS_FILE
declare -a settedVars
settedVars=($(getSettedVars "${TMPL_PASS_FILE}"))
if [ -n "${settedVars}" ]; then
echo "unclear password in ${YELLOW}${BOLD}${TMPL_PASS_FILE}${NC}:${BLUE}${BOLD}"
for var in ${settedVars[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to clear them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${TMPL_PASS_FILE}"
;;
esac
fi
########################################
# check new files env-*
@@ -146,7 +95,7 @@ createMissingEnv "${TMPL_PASS_DIR}" "${RUN_PASS_DIR}"
declare -a listTmpl listRun listCommonFiles
listTmplFiles=($(cd "${TMPL_PASS_DIR}"; ls -1 env-* | grep -v '~$'))
listRunFiles=($(cd "${RUN_PASS_DIR}"; ls -1 env-* | grep -v '~$'))
listCommonFiles=($(comm -3 <(printf "%s\n" ${listTmplFiles[@]}) <(printf "%s\n" ${listRunFiles[@]})))
listCommonFiles=($(comm -12 <(printf "%s\n" ${listTmplFiles[@]}) <(printf "%s\n" ${listRunFiles[@]})))
for envFile in ${listCommonFiles[@]}; do
while : ; do
TMPL_FILE="${TMPL_PASS_DIR}/${envFile}"
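The switch from `comm -3` to `comm -12` in the hunk above is significant: `-12` suppresses the lines unique to each sorted input and keeps only the common ones, which is what a variable named listCommonFiles should hold; `-3` does the opposite. A small sketch with hypothetical file names:

```shell
#!/bin/bash
# comm expects sorted inputs; env-a/env-b/env-c are placeholder names.
tmpl=$(printf "%s\n" env-a env-b)   # hypothetical template-side files
run=$(printf "%s\n" env-b env-c)    # hypothetical run-side files
# -12 keeps only lines present in both lists
common=$(comm -12 <(echo "$tmpl") <(echo "$run"))
echo "$common"
```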
@@ -224,21 +173,19 @@ if [ -n "${missing}" ]; then
fi
########################################
# check env-* in updateDockerPassword.sh
missing=($(for DIR in "${RUN_PASS_DIR}" "${TMPL_PASS_DIR}"; do
# check extension in dockers.env
declare -a missing
unsetted=($(for DIR in "${RUN_PASS_DIR}"; do
for envFile in $(ls -1 "${DIR}/"env-* | grep -v '~$'); do
val="${envFile#*env-}"
varName=$(getVarFormVal "${val}" "${DOCKERS_ENV}")
[ -z "${varName}" ] && continue
prefixe=$(grep "^\s*updateEnv.*${varName}" "${KAZ_BIN_DIR}/updateDockerPassword.sh" |
sed 's/\s*updateEnv[^"]*"\([^"]*\)".*/\1/' | sort -u)
if [ -z "${prefixe}" ]; then
echo "${envFile#*/}_(\${KAZ_KEY_DIR}/env-\${"${varName}"})"
if [ -z "${varName}" ]; then
echo "${val}"
fi
done
done | sort -u))
if [ -n "${missing}" ]; then
echo "missing update in ${GREEN}${BOLD}${KAZ_BIN_DIR}/updateDockerPassword.sh${NC}:${BLUE}${BOLD}"
echo "missing def in ${GREEN}${BOLD}${DOCKERS_ENV}${NC}:${BLUE}${BOLD}"
for var in ${missing[@]}; do
echo -e "\t${var}"
done
@@ -246,53 +193,17 @@ if [ -n "${missing}" ]; then
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${KAZ_BIN_DIR}/updateDockerPassword.sh"
emacs "${DOCKERS_ENV}"
;;
esac
fi
########################################
# synchronized SetAllPass.sh and env-*
updateEnvFiles () {
# $1 secret dir
DIR=$1
listRef=($(getVars "${DIR}/SetAllPass.sh"))
missing=($(for envFile in $(ls -1 "${DIR}/"env-* | grep -v '~$'); do
val="${envFile#*env-}"
varName=$(getVarFormVal "${val}" "${DOCKERS_ENV}")
[ -z "${varName}" ] && continue
prefixe=$(grep "^\s*updateEnv.*${varName}" "${KAZ_BIN_DIR}/updateDockerPassword.sh" |
sed 's/\s*updateEnv[^"]*"\([^"]*\)".*/\1/' | sort -u)
[ -z "${prefixe}" ] && continue
listVarsInEnv=($(getVars "${envFile}"))
for var in ${listVarsInEnv[@]}; do
[[ ! " ${listRef[@]} " =~ " ${prefixe}_${var} " ]] && echo "${prefixe}_${var}"
done
# XXX must exist in SetAllPass.sh with the prefix
done))
if [ -n "${missing}" ]; then
echo "missing update in ${GREEN}${BOLD}${DIR}/SetAllPass.sh${NC}:${BLUE}${BOLD}"
for var in ${missing[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${DIR}/SetAllPass.sh"
;;
esac
fi
}
updateEnvFiles "${RUN_PASS_DIR}"
updateEnvFiles "${TMPL_PASS_DIR}"
# XXX look for unused variables in the SetAllPass.sh files
if [ -n "${NEED_GEN}" ]; then
while : ; do
read -p "Do you want to generate blank values? [y/n]: " yn
read -p "Do you want to generate missing values? [y/n]: " yn
case $yn in
""|[Yy]*)
"${KAZ_BIN_DIR}/secretGen.sh"


@@ -1,11 +0,0 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
for filename in "${KAZ_KEY_DIR}/"env-*Serv "${KAZ_KEY_DIR}/"env-*DB; do
if grep -q "^[^#=]*=\s*$" "${filename}" 2>/dev/null; then
echo "${filename}"
fi
done
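The deleted loop above keys on a single regex, `^[^#=]*=\s*$`: an assignment line whose right-hand side is blank. A self-contained check of that pattern (the variable names are placeholders; `[[:space:]]` is the POSIX-class spelling of `\s`):

```shell
#!/bin/bash
# Matches "NAME=" or "NAME=   " but not "NAME=value".
is_blank_assignment() {
    grep -qE '^[^#=]*=[[:space:]]*$' <<< "$1"
}
is_blank_assignment "MYSQL_PASSWORD=" && r1=yes || r1=no
is_blank_assignment "MYSQL_PASSWORD=s3cret" && r2=yes || r2=no
echo "$r1 $r2"
```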


@@ -8,6 +8,9 @@
# Did: 13 February 2025, changed the postgres and mysql saves
# Did: added the mobilizon and mattermost saves in postgres
# 20/04/2025
# Did: added the peertube saves to the general services
# If postfix is absent, you must first run:
# docker network create postfix_mailNet
@@ -16,8 +19,7 @@
# saves a compose's database
# updates the proxy configuration parameters
#KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
KAZ_ROOT=/kaz
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
@@ -59,20 +61,6 @@ doCompose () {
${SIMU} ln -fs ../../config/dockers.env .env
fi
${SIMU} docker-compose $1
if [ "$2" = "cachet" ] && [ "$1" != "down" ]; then
NEW_KEY=$(cd "${KAZ_COMP_DIR}/$2" ; docker-compose logs | grep APP_KEY=base64: | sed "s/^.*'APP_KEY=\(base64:[^']*\)'.*$/\1/" | tail -1)
if [ -n "${NEW_KEY}" ]; then
printKazMsg "cachet key change"
# change key
${SIMU} sed -i \
-e 's%^\(\s*cachet_APP_KEY=\).*$%\1"'"${NEW_KEY}"'"%' \
"${KAZ_KEY_DIR}/SetAllPass.sh"
${SIMU} "${KAZ_BIN_DIR}/secretGen.sh"
# restart
${SIMU} docker-compose $1
fi
fi
}
doComposes () {
@@ -175,7 +163,6 @@ statusComposes () {
saveComposes () {
. "${DOCKERS_ENV}"
. "${KAZ_ROOT}/secret/SetAllPass.sh"
savedComposes+=( ${enableMailComposes[@]} )
savedComposes+=( ${enableProxyComposes[@]} )
@@ -193,41 +180,59 @@ saveComposes () {
;;
sympa)
echo "save sympa"
saveDB ${sympaDBName} "${sympa_MYSQL_USER}" "${sympa_MYSQL_PASSWORD}" "${sympa_MYSQL_DATABASE}" sympa mysql
. $KAZ_KEY_DIR/env-sympaDB
saveDB ${sympaDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" sympa mysql
;;
web)
# nothing to do (files)
;;
etherpad)
echo "save pad"
saveDB ${etherpadDBName} "${etherpad_MYSQL_USER}" "${etherpad_MYSQL_PASSWORD}" "${etherpad_MYSQL_DATABASE}" etherpad mysql
. $KAZ_KEY_DIR/env-etherpadDB
saveDB ${etherpadDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" etherpad mysql
;;
framadate)
echo "save date"
saveDB ${framadateDBName} "${framadate_MYSQL_USER}" "${framadate_MYSQL_PASSWORD}" "${framadate_MYSQL_DATABASE}" framadate mysql
. $KAZ_KEY_DIR/env-framadateDB
saveDB ${framadateDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" framadate mysql
;;
cloud)
echo "save cloud"
saveDB ${nextcloudDBName} "${nextcloud_MYSQL_USER}" "${nextcloud_MYSQL_PASSWORD}" "${nextcloud_MYSQL_DATABASE}" nextcloud mysql
. $KAZ_KEY_DIR/env-nextcloudDB
saveDB ${nextcloudDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" nextcloud mysql
;;
paheko)
# nothing to do (files)
;;
mattermost)
echo "save mattermost"
saveDB matterPG "${mattermost_POSTGRES_USER}" "${mattermost_POSTGRES_PASSWORD}" "${mattermost_POSTGRES_DB}" mattermost postgres
. $KAZ_KEY_DIR/env-mattermostDB
saveDB matterPG "${DB_POSTGRES_USER}" "${DB_POSTGRES_PASSWORD}" "${DB_POSTGRES_DB}" mattermost postgres
;;
mobilizon)
echo "save mobilizon"
saveDB ${mobilizonDBName} "${mobilizon_POSTGRES_USER}" "${mobilizon_POSTGRES_PASSWORD}" "${mobilizon_POSTGRES_DB}" mobilizon postgres
. $KAZ_KEY_DIR/env-mobilizonDB
saveDB ${mobilizonDBName} "${DB_POSTGRES_USER}" "${DB_POSTGRES_PASSWORD}" "${DB_POSTGRES_DB}" mobilizon postgres
;;
peertube)
echo "save peertube"
. $KAZ_KEY_DIR/env-peertubeDB
saveDB ${peertubeDBName} "${DB_POSTGRES_USER}" "${DB_POSTGRES_PASSWORD}" "${DB_PEERTUBE_DB_HOSTNAME}" peertube postgres
;;
mastodon)
echo "save mastodon"
. $KAZ_KEY_DIR/env-mastodonDB
saveDB ${mastodonDBName} "${DB_POSTGRES_USER}" "${DB_POSTGRES_PASSWORD}" "${DB_POSTGRES_DB}" mastodon postgres
;;
roundcube)
echo "save roundcube"
saveDB ${roundcubeDBName} "${roundcube_MYSQL_USER}" "${roundcube_MYSQL_PASSWORD}" "${roundcube_MYSQL_DATABASE}" roundcube mysql
. $KAZ_KEY_DIR/env-roundcubeDB
saveDB ${roundcubeDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" roundcube mysql
;;
vaultwarden)
echo "save vaultwarden"
saveDB ${vaultwardenDBName} "${vaultwarden_MYSQL_USER}" "${vaultwarden_MYSQL_PASSWORD}" "${vaultwarden_MYSQL_DATABASE}" vaultwarden mysql
. $KAZ_KEY_DIR/env-vaultwardenDB
saveDB ${vaultwardenDBName} "${DB_MYSQL_USER}" "${DB_MYSQL_PASSWORD}" "${DB_MYSQL_DATABASE}" vaultwarden mysql
;;
dokuwiki)
# nothing to do (files)
@@ -237,15 +242,23 @@ saveComposes () {
echo "save ${ORGA}"
if grep -q "cloud:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => cloud"
saveDB "${ORGA}-DB" "${nextcloud_MYSQL_USER}" "${nextcloud_MYSQL_PASSWORD}" "${nextcloud_MYSQL_DATABASE}" "${ORGA}-cloud" mysql
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudDB
saveDB "${ORGA}-DB" "${MYSQL_USER}" "${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" "${ORGA}-cloud" mysql
fi
if grep -q "agora:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => mattermost"
saveDB "${ORGA}-DB" "${mattermost_MYSQL_USER}" "${mattermost_MYSQL_PASSWORD}" "${mattermost_MYSQL_DATABASE}" "${ORGA}-mattermost" mysql
. $KAZ_KEY_DIR/orgas/$ORGA/env-mattermostDB
saveDB "${ORGA}-DB" "${MYSQL_USER}" "${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" "${ORGA}-mattermost" mysql
fi
if grep -q "wordpress:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => wordpress"
saveDB "${ORGA}-DB" "${wp_MYSQL_USER}" "${wp_MYSQL_PASSWORD}" "${wp_MYSQL_DATABASE}" "${ORGA}-wordpress" mysql
. $KAZ_KEY_DIR/orgas/$ORGA/env-wpDB
saveDB "${ORGA}-DB" "${MYSQL_USER}" "${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" "${ORGA}-wordpress" mysql
fi
if grep -q "spip:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => spip"
. $KAZ_KEY_DIR/orgas/$ORGA/env-spipDB
saveDB "${ORGA}-DB" "${MYSQL_USER}" "${MYSQL_PASSWORD}" "${MYSQL_DATABASE}" "${ORGA}-spip" mysql
fi
;;
esac

bin/createDBUsers.sh Executable file

@@ -0,0 +1,82 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
# for debugging
# SIMU=echo
# Planned improvements
# - take the affected services as parameters (to limit the changes)
# - for the DBs: if a new login is declared, the privileges are created but the old ones are not revoked
. "${DOCKERS_ENV}"
createMysqlUser(){
# $1 = envName
# $2 = containerName of DB
. $KAZ_KEY_DIR/env-$1
# only if root has no password
# chicken-and-egg problem (the old values would be needed):
# * if rootPass changes, do it by hand
# * if dbName changes, do it by hand
checkDockerRunning "$2" "$2" || return
echo "change DB pass on docker $2"
echo "grant all privileges on ${MYSQL_DATABASE}.* to '${MYSQL_USER}' identified by '${MYSQL_PASSWORD}';" | \
docker exec -i $2 bash -c "mysql --user=root --password=${MYSQL_ROOT_PASSWORD}"
}
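createMysqlUser builds its GRANT statement by shell interpolation before piping it into the container. A sketch of just that string assembly, with placeholder credentials and no docker exec:

```shell
#!/bin/bash
# Placeholder values; a real run would source an env-* file instead.
MYSQL_DATABASE="nextcloud"
MYSQL_USER="nc"
MYSQL_PASSWORD="pw"
sql="grant all privileges on ${MYSQL_DATABASE}.* to '${MYSQL_USER}' identified by '${MYSQL_PASSWORD}';"
echo "$sql"
```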
framadateUpdate(){
[[ "${COMP_ENABLE}" =~ " framadate " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php" ]; then
return 0
fi
. $KAZ_KEY_DIR/env-framadateDB
. $KAZ_KEY_DIR/env-framadateServ
checkDockerRunning "${framadateServName}" "Framadate" &&
${SIMU} docker exec -ti "${framadateServName}" bash -c -i "htpasswd -bc /var/framadate/admin/.htpasswd ${HTTPD_USER} ${HTTPD_PASSWORD}"
${SIMU} sed -i \
-e "s/^#*const DB_USER[ ]*=.*$/const DB_USER= '${DB_MYSQL_USER}';/g" \
-e "s/^#*const DB_PASSWORD[ ]*=.*$/const DB_PASSWORD= '${DB_MYSQL_PASSWORD}';/g" \
"${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php"
}
jirafeauUpdate(){
[[ "${COMP_ENABLE}" =~ " jirafeau " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php" ]; then
return 0
fi
. $KAZ_KEY_DIR/env-jirafeauServ
SHA=$(echo -n "${_HTTPD_PASSWORD}" | sha256sum | cut -d \ -f 1)
${SIMU} sed -i \
-e "s/'admin_password'[ ]*=>[ ]*'[^']*'/'admin_password' => '${SHA}'/g" \
"${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php"
}
####################
# main
createMysqlUser "etherpadDB" "${etherpadDBName}"
createMysqlUser "framadateDB" "${framadateDBName}"
createMysqlUser "giteaDB" "${gitDBName}"
createMysqlUser "mattermostDB" "${mattermostDBName}"
createMysqlUser "nextcloudDB" "${nextcloudDBName}"
createMysqlUser "roundcubeDB" "${roundcubeDBName}"
createMysqlUser "sympaDB" "${sympaDBName}"
createMysqlUser "vigiloDB" "${vigiloDBName}"
createMysqlUser "wpDB" "${wordpressDBName}"
createMysqlUser "vaultwardenDB" "${vaultwardenDBName}"
createMysqlUser "castopodDB" "${castopodDBName}"
createMysqlUser "spipDB" "${spipDBName}"
createMysqlUser "mastodonDB" "${mastodonDBName}"
framadateUpdate
jirafeauUpdate
exit 0
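jirafeauUpdate above derives the admin password hash with sha256sum. The same pipeline in isolation (the password is a placeholder, not a real credential):

```shell
#!/bin/bash
# echo -n avoids hashing a trailing newline; cut keeps only the hex digest.
HTTPD_PASSWORD="secret"   # placeholder value
SHA=$(echo -n "${HTTPD_PASSWORD}" | sha256sum | cut -d ' ' -f 1)
echo "$SHA"
```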


@@ -1,104 +0,0 @@
#!/bin/bash
cd $(dirname $0)/..
mkdir -p emptySecret
rsync -aHAX --info=progress2 --delete secret/ emptySecret/
cd emptySecret/
. ../config/dockers.env
. ./SetAllPass.sh
# for debugging
# SIMU=echo
cleanEnvDB(){
# $1 = prefix
# $2 = envName
# $3 = containerName of DB
rootPass="--root_password--"
dbName="--database_name--"
userName="--user_name--"
userPass="--user_password--"
${SIMU} sed -i \
-e "s/MYSQL_ROOT_PASSWORD=.*/MYSQL_ROOT_PASSWORD=${rootPass}/g" \
-e "s/MYSQL_DATABASE=.*/MYSQL_DATABASE=${dbName}/g" \
-e "s/MYSQL_USER=.*/MYSQL_USER=${userName}/g" \
-e "s/MYSQL_PASSWORD=.*/MYSQL_PASSWORD=${userPass}/g" \
"$2"
}
cleanEnv(){
# $1 = prefix
# $2 = envName
for varName in $(grep "^[a-zA-Z_]*=" $2 | sed "s/^\([^=]*\)=.*/\1/g")
do
srcName="$1_${varName}"
srcVal="--clean_val--"
${SIMU} sed -i \
-e "s~^[ ]*${varName}=.*$~${varName}=${srcVal}~" \
"$2"
done
}
cleanPasswd(){
${SIMU} sed -i \
-e 's/^\([# ]*[^#= ]*\)=".[^{][^"]*"/\1="--clean_val--"/g' \
./SetAllPass.sh
}
####################
# main
# read -r -p "Do you want to remove all password? [Y/n] " input
# case $input in
# [yY][eE][sS]|[yY])
# echo "Remove all password"
# ;;
# [nN][oO]|[nN])
# echo "Abort"
# ;;
# *)
# echo "Invalid input..."
# exit 1
# ;;
# esac
cleanPasswd
cleanEnvDB "etherpad" "./env-${etherpadDBName}" "${etherpadDBName}"
cleanEnvDB "framadate" "./env-${framadateDBName}" "${framadateDBName}"
cleanEnvDB "git" "./env-${gitDBName}" "${gitDBName}"
cleanEnvDB "mattermost" "./env-${mattermostDBName}" "${mattermostDBName}"
cleanEnvDB "nextcloud" "./env-${nextcloudDBName}" "${nextcloudDBName}"
cleanEnvDB "roundcube" "./env-${roundcubeDBName}" "${roundcubeDBName}"
cleanEnvDB "sso" "./env-${ssoDBName}" "${ssoDBName}"
cleanEnvDB "sympa" "./env-${sympaDBName}" "${sympaDBName}"
cleanEnvDB "vigilo" "./env-${vigiloDBName}" "${vigiloDBName}"
cleanEnvDB "wp" "./env-${wordpressDBName}" "${wordpressDBName}"
cleanEnv "etherpad" "./env-${etherpadServName}"
cleanEnv "gandi" "./env-gandi"
cleanEnv "jirafeau" "./env-${jirafeauServName}"
cleanEnv "mattermost" "./env-${mattermostServName}"
cleanEnv "nextcloud" "./env-${nextcloudServName}"
cleanEnv "office" "./env-${officeServName}"
cleanEnv "roundcube" "./env-${roundcubeServName}"
cleanEnv "sso" "./env-${ssoServName}"
cleanEnv "vigilo" "./env-${vigiloServName}"
cleanEnv "wp" "./env-${wordpressServName}"
cat > allow_admin_ip <<EOF
# ip for admin access only
# local test
allow 127.0.0.0/8;
allow 192.168.0.0/16;
EOF
chmod -R go= .
chmod -R +X .


@@ -3,14 +3,13 @@
cd $(dirname $0)
./setOwner.sh
./createEmptyPasswd.sh
cd ../..
FILE_NAME="/tmp/$(date +'%Y%M%d')-KAZ.tar.bz2"
FILE_NAME="/tmp/$(date +'%Y%m%d')-KAZ.tar.bz2"
tar -cjf "${FILE_NAME}" --transform s/emptySecret/secret/ \
./kaz/emptySecret/ ./kaz/bin ./kaz/config ./kaz/dockers
tar -cjf "${FILE_NAME}" --transform s/secret.tmpl/secret/ \
./kaz/secret.tmpl/ ./kaz/bin ./kaz/config ./kaz/dockers
ls -l "${FILE_NAME}"
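The FILE_NAME change above fixes a classic date(1) trap: %M is the minute field while %m is the month, so the old pattern produced year-minute-day stamps. Demonstrated on a fixed instant (GNU date's -d option assumed):

```shell
#!/bin/bash
# Same instant formatted both ways; only %m gives a calendar date.
wrong=$(date -d "2025-07-23 06:34" +'%Y%M%d')   # %M = minutes
right=$(date -d "2025-07-23 06:34" +'%Y%m%d')   # %m = month
echo "$wrong vs $right"
```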

bin/createUser.py Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/python3
from lib.user import create_users_from_file
create_users_from_file()


@@ -37,12 +37,14 @@ setKazVars
cd "${KAZ_ROOT}"
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
. $KAZ_KEY_DIR/env-ldapServ
. $KAZ_KEY_DIR/env-sympaServ
. $KAZ_KEY_DIR/env-paheko
# DOCK_DIR="${KAZ_COMP_DIR}" # ???
SETUP_MAIL="docker exec -ti mailServ setup"
# determine the calling script, the log file and the source file, all from the same root
PRG=$(basename $0)
RACINE=${PRG%.sh}
@@ -73,7 +75,7 @@ URL_LISTE="${sympaHost}.${domain}"
URL_AGORA="${matterHost}.${domain}"
URL_MDP="${ldapUIHost}.${domain}"
# URL_PAHEKO="kaz-${pahekoHost}.${domain}"
URL_PAHEKO="${httpProto}://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.${domain}"
URL_PAHEKO="${httpProto}://${API_USER}:${API_PASSWORD}@kaz-paheko.${domain}"
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
@@ -208,16 +210,7 @@ for i in "${CMD_LOGIN}" "${CMD_SYMPA}" "${CMD_ORGA}" "${CMD_PROXY}" "${CMD_FIRST
done
echo "numero,nom,quota_disque,action_auto" > "${TEMP_PAHEKO}"
echo "curl \"https://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.kaz.bzh/api/user/import\" -T \"${TEMP_PAHEKO}\"" >> "${CMD_PAHEKO}"
#echo "récupération des login postfix... "
## store the KAZ emails and aliases already created
#(
# ${SETUP_MAIL} email list
# ${SETUP_MAIL} alias list
#) | cut -d ' ' -f 2 | grep @ | sort > "${TFILE_EMAIL}"
# did: strip the trailing ^M at end of file so the greps don't crash
#dos2unix "${TFILE_EMAIL}"
echo "curl \"https://${API_USER}:${API_PASSWORD}@kaz-paheko.kaz.bzh/api/user/import\" -T \"${TEMP_PAHEKO}\"" >> "${CMD_PAHEKO}"
echo "on récupère tous les emails (secours/alias/kaz) sur le ldap"
FILE_LDIF=/home/sauve/ldap.ldif
@@ -226,13 +219,14 @@ gunzip ${FILE_LDIF}.gz -f
grep -aEiorh '([[:alnum:]]+([._-][[:alnum:]]+)*@[[:alnum:]]+([._-][[:alnum:]]+)*\.[[:alpha:]]{2,6})' ${FILE_LDIF} | sort -u > ${TFILE_EMAIL}
echo "récupération des login mattermost... "
docker exec -ti mattermostServ bin/mmctl user list --all | grep ":.*(" | cut -d ':' -f 2 | cut -d ' ' -f 2 | sort > "${TFILE_MM}"
docker exec -i mattermostServ bin/mmctl user list --all | grep ":.*(" | cut -d ':' -f 2 | cut -d ' ' -f 2 | sort > "${TFILE_MM}"
dos2unix "${TFILE_MM}"
echo "done"
# se connecter à l'agora pour ensuite pouvoir passer toutes les commandes mmctl
echo "docker exec -ti mattermostServ bin/mmctl auth login ${httpProto}://${URL_AGORA} --name local-server --username ${mattermost_user} --password ${mattermost_pass}" | tee -a "${CMD_INIT}"
. $KAZ_KEY_DIR/env-mattermostAdmin
echo "docker exec -i mattermostServ bin/mmctl auth login ${httpProto}://${URL_AGORA} --name local-server --username ${mattermost_user} --password ${mattermost_pass}" | tee -a "${CMD_INIT}"
# email validation
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"
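The validation regex above can be exercised directly with bash's `=~` operator; a standalone sketch (the wrapper function and sample addresses are illustrative, not part of the script):

```shell
# Same validation regex as the script; a valid address must match entirely.
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"
is_valid_email() {
    # keep $regex unquoted so it is treated as a pattern, not a literal
    [[ "$1" =~ $regex ]]
}
is_valid_email "jean.dupont@kaz.bzh" && echo "valid"
is_valid_email "not an email" || echo "invalid"
```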
@@ -379,8 +373,6 @@ while read ligne; do
else
SEND_MSG_CREATE=true
echo "${EMAIL_SOUHAITE} does not exist" | tee -a "${LOG}"
echo "${SETUP_MAIL} email add ${EMAIL_SOUHAITE} ${PASSWORD}" | tee -a "${CMD_LOGIN}"
echo "${SETUP_MAIL} quota set ${EMAIL_SOUHAITE} ${QUOTA}G" | tee -a "${CMD_LOGIN}"
# LDAP, to be tested
user=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $1}')
domain=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $2}')
@@ -406,9 +398,9 @@ nextcloudEnabled: TRUE\n\
nextcloudQuota: ${QUOTA} GB\n\
mobilizonEnabled: TRUE\n\
agoraEnabled: TRUE\n\
userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}\" -x -w ${ldap_LDAP_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${LDAP_ADMIN_USERNAME},${ldap_root}\" -x -w ${LDAP_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
fi
#userPassword: {CRYPT}\$6\$${pass}\n\n\" | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${ldap_LDAP_CONFIG_ADMIN_USERNAME},${ldap_root}\" -x -w ${ldap_LDAP_CONFIG_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
#userPassword: {CRYPT}\$6\$${pass}\n\n\" | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${LDAP_CONFIG_ADMIN_USERNAME},${ldap_root}\" -x -w ${LDAP_CONFIG_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
CREATE_ORGA_SERVICES=""
@@ -437,15 +429,16 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1}${NL}* un bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances : ${httpProto}://${URL_NC}"
# does the user already exist on NC?
curl -o "${TEMP_USER_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
. $KAZ_KEY_DIR/env-nextcloudServ
curl -o "${TEMP_USER_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
if grep -q "<element>${IDENT_KAZ}</element>" "${TEMP_USER_NC}"; then
echo "${IDENT_KAZ} already exists on ${URL_NC}" | tee -a "${LOG}"
else
# create the user on NC, unless this is the shared NC, where the user is never created
if [ ${URL_NC} != "${cloudHost}.${domain}" ]; then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudServ
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
-d userid='${IDENT_KAZ}' \
-d displayName='${PRENOM} ${NOM}' \
-d password='${PASSWORD}' \
@@ -458,19 +451,22 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# if they are admin of their orga, make them admin
if [ "${service[ADMIN_ORGA]}" == "O" -a "${ORGA}" != "" -a "${service[NC_ORGA]}" == "O" ]; then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${nextcloud_NEXTCLOUD_ADMIN_USER}:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid='admin'" | tee -a "${CMD_INIT}"
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudServ
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid='admin'" | tee -a "${CMD_INIT}"
fi
# should the NC user go into a specific group on the base NC?
if [ "${GROUPE_NC_BASE}" != "" -a "${service[NC_BASE]}" == "O" ]; then
# here we are back on the shared NC, so pick up the right passwords again
. $KAZ_KEY_DIR/env-nextcloudServ
# does the group already exist?
curl -o "${TEMP_GROUP_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups?search=${GROUPE_NC_BASE}"
curl -o "${TEMP_GROUP_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups?search=${GROUPE_NC_BASE}"
nb=$(grep "<element>${GROUPE_NC_BASE}</element>" "${TEMP_GROUP_NC}" | wc -l)
if [ "${nb}" == "0" ];then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
fi
# then attach the user to the group
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${NEXTCLOUD_ADMIN_USER}:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
fi
fi
@@ -496,7 +492,8 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# TODO: check that the user exists
# # does the user already exist on the wp?
# curl -o "${TEMP_USER_WP}" -X GET "${httpProto}://${wp_WORDPRESS_ADMIN_USER}:${wp_WORDPRESS_ADMIN_PASSWORD}@${URL_WP_ORGA}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
# . $KAZ_KEY_DIR/env-wpServ
# curl -o "${TEMP_USER_WP}" -X GET "${httpProto}://${WORDPRESS_ADMIN_USER}:${WORDPRESS_ADMIN_PASSWORD}@${URL_WP_ORGA}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
# nb_user_wp_orga=$(grep "<element>${IDENT_KAZ}</element>" "${TEMP_USER_WP}" | wc -l)
# if [ "${nb_user_wp_orga}" != "0" ];then
# (
@@ -514,7 +511,7 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# ) | tee -a "${LOG}"
#
# # delete the user on NC.
# echo "curl -X DELETE -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
# echo "curl -X DELETE -H 'OCS-APIRequest:true' ${httpProto}://admin:${NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
# -d userid='${IDENT_KAZ}' \
# " | tee -a "${CMD_INIT}"
# fi
@@ -597,11 +594,11 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
echo "${IDENT_KAZ} already exists on mattermost" | tee -a "${LOG}"
else
# create the mattermost account
echo "docker exec -ti mattermostServ bin/mmctl user create --email ${EMAIL_SOUHAITE} --username ${IDENT_KAZ} --password ${PASSWORD}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl user create --email ${EMAIL_SOUHAITE} --username ${IDENT_KAZ} --password ${PASSWORD}" | tee -a "${CMD_LOGIN}"
# and finally, always add the user to the KAZ team and to the 2 public channels
echo "docker exec -ti mattermostServ bin/mmctl team users add kaz ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:une-question--un-soucis ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:cafe-du-commerce--ouvert-2424h ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl team users add kaz ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl channel users add kaz:une-question--un-soucis ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -i mattermostServ bin/mmctl channel users add kaz:cafe-du-commerce--ouvert-2424h ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
NB_SERVICES_BASE=$((NB_SERVICES_BASE+1))
fi
@@ -609,10 +606,10 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# does the team already exist?
nb=$(docker exec mattermostServ bin/mmctl team list | grep -w "${EQUIPE_AGORA}" | wc -l)
if [ "${nb}" == "0" ];then # no: create it, putting the user as team admin
echo "docker exec -ti mattermostServ bin/mmctl team create --name ${EQUIPE_AGORA} --display_name ${EQUIPE_AGORA} --email ${EMAIL_SOUHAITE}" --private | tee -a "${CMD_INIT}"
echo "docker exec -i mattermostServ bin/mmctl team create --name ${EQUIPE_AGORA} --display_name ${EQUIPE_AGORA} --email ${EMAIL_SOUHAITE} --private" | tee -a "${CMD_INIT}"
fi
# then add the user to the team
echo "docker exec -ti mattermostServ bin/mmctl team users add ${EQUIPE_AGORA} ${EMAIL_SOUHAITE}" | tee -a "${CMD_INIT}"
echo "docker exec -i mattermostServ bin/mmctl team users add ${EQUIPE_AGORA} ${EMAIL_SOUHAITE}" | tee -a "${CMD_INIT}"
fi
if [ -n "${CREATE_ORGA_SERVICES}" ]; then
@@ -629,16 +626,16 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
# TODO: use the list on dev as well
# subscribe the user on sympa, to the infos@${domain_sympa} list
# docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=https://listes.kaz.sns/sympasoap --trusted_application=SOAP_USER --trusted_application_password=SOAP_PASSWORD --proxy_vars="USER_EMAIL=contact1@kaz.sns" --service=which
# docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=https://listes.kaz.sns/sympasoap --trusted_application=SOAP_USER --trusted_application_password=SOAP_PASSWORD --proxy_vars="USER_EMAIL=contact1@kaz.sns" --service=which
if [[ "${mode}" = "dev" ]]; then
echo "# DEV, test the sympa subscription"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${LISTMASTERS} | cut -d',' -f1)
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
else
echo "# PROD, subscribe to sympa"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SECOURS}\"" | tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${LISTMASTERS} | cut -d',' -f1)
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SECOURS}\"" | tee -a "${CMD_SYMPA}"
fi
if [ "${service[ADMIN_ORGA]}" == "O" ]; then
@@ -650,7 +647,7 @@ userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=$
###################
# TODO: problem if 2 accounts share the same desired email (this should not happen)
curl -s "https://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.kaz.bzh/api/sql" -d "SELECT numero,nom,quota_disque from users WHERE email='${EMAIL_SOUHAITE}'" | jq '.results[] | .numero,.nom,.quota_disque ' | tr \\n ',' | sed 's/,$/,Aucune\n/' >> "${TEMP_PAHEKO}"
curl -s "https://${API_USER}:${API_PASSWORD}@kaz-paheko.kaz.bzh/api/sql" -d "SELECT numero,nom,quota_disque from users WHERE email='${EMAIL_SOUHAITE}'" | jq '.results[] | .numero,.nom,.quota_disque ' | tr \\n ',' | sed 's/,$/,Aucune\n/' >> "${TEMP_PAHEKO}"
####################
# MAIL signup      #
@@ -760,7 +757,7 @@ ${MAIL_KAZ}
EOF" | tee -a "${CMD_MSG}"
echo " # send the signup confirmation to the agora " | tee -a "${CMD_MSG}"
echo "docker exec -ti mattermostServ bin/mmctl post create kaz:Creation-Comptes --message \"${MAIL_KAZ}\"" | tee -a "${CMD_MSG}"
echo "docker exec -i mattermostServ bin/mmctl post create kaz:Creation-Comptes --message \"${MAIL_KAZ}\"" | tee -a "${CMD_MSG}"
# end of signups
done <<< "${ALL_LINES}"


@@ -1,6 +1,11 @@
#!/bin/bash
#!/bin/bash
# list/add/delete a sub-domain
#koi: manage DNS records on AlwaysData
#ki: fanch&gaël&fab
#kan: 06/04/2025
#doc: https://api.alwaysdata.com/v1/record/doc/
#doc: https://help.alwaysdata.com/fr/api/
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
@@ -15,6 +20,7 @@ export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
#TODO: fetch the list of kaz services instead of hard-coding them
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
@@ -31,6 +37,15 @@ usage(){
exit 1
}
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
for ARG in $@
do
case "${ARG}" in
@@ -60,78 +75,15 @@ if [ -z "${CMD}" ]; then
usage
fi
. "${KAZ_KEY_DIR}/env-gandi"
if [[ -z "${GANDI_KEY}" ]] ; then
echo
echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
usage
fi
waitNet () {
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
### wait when error code 503
if [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]; then
echo "DNS not available. Please wait..."
while [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]
do
sleep 5
done
exit
fi
}
list(){
if [[ "${domain}" = "kaz.local" ]]; then
grep --perl-regex "^${IP}\s.*${domain}" "${ETC_HOSTS}" 2> /dev/null | sed -e "s|^${IP}\s*\([0-9a-z.-]${domain}\)$|\1|g"
return
fi
waitNet
trap 'rm -f "${TMPFILE}"' EXIT
TMPFILE="$(mktemp)" || exit 1
if [[ -n "${SIMU}" ]] ; then
${SIMU} curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"
else
curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null | \
sed "s/,{/\n/g" | \
sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'| \
grep -v '^[_@]'| \
grep -e ":${domain}\.*$" -e ":prod[0-9]*$" > ${TMPFILE}
fi
if [ $# -lt 1 ]; then
cat ${TMPFILE}
else
for ARG in $@
do
cat ${TMPFILE} | grep "${ARG}.*:"
done
fi
TARGET=$@
LISTE=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" | jq '.[] | "\(.name):\(.value)"')
echo ${LISTE}
}
saveDns () {
for ARG in $@ ; do
if [[ "${ARG}" =~ .local$ ]] ; then
echo "${PRG}: old-fashioned style (remove .local at the end)"
usage;
fi
if [[ "${ARG}" =~ .bzh$ ]] ; then
echo "${PRG}: old-fashioned style (remove .bzh at the end)"
usage;
fi
if [[ "${ARG}" =~ .dev$ ]] ; then
echo "${PRG}: old-fashioned style (remove .dev at the end)"
usage;
fi
done
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
waitNet
${SIMU} curl -X POST "${GANDI_API}/snapshots" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
badName(){
@@ -154,28 +106,14 @@ add(){
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
if grep -q --perl-regex "^${IP}[ \t]" "${ETC_HOSTS}" 2> /dev/null ; then
${SIMU} sudo sed -i -e "0,/^${IP}[ \t]/s/^\(${IP}[ \t]\)/\1${ARG}.${domain} /g" "${ETC_HOSTS}"
else
${SIMU} sudo sed -i -e "$ a ${IP}\t${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null
fi
;;
*)
${SIMU} curl -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"CNAME", "rrset_name":"'${ARG}'", "rrset_values":["'${site}'"]}'
echo
;;
esac
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"CNAME\", \"name\":\"${ARG}\", \"value\":\"${site}.${domain}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
del(){
if [ $# -lt 1 ]; then
exit
fi
@@ -187,23 +125,11 @@ del(){
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if ! grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
${SIMU} sudo sed -i -e "/^${IP}[ \t]*${ARG}.${domain}[ \t]*$/d" \
-e "s|^\(${IP}.*\)[ \t]${ARG}.${domain}|\1|g" "${ETC_HOSTS}"
;;
* )
${SIMU} curl -X DELETE "${GANDI_API}/records/${ARG}" -H "authorization: Apikey ${GANDI_KEY}"
echo
;;
esac
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${ARG}&type=CNAME&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${ARG}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
#echo "CMD: ${CMD} $*"
${CMD} $*

bin/dns_alwaysdata.sh Executable file

@@ -0,0 +1,135 @@
#!/bin/bash
#koi: manage DNS records on AlwaysData
#ki: fanch&gaël&fab
#kan: 06/04/2025
#doc: https://api.alwaysdata.com/v1/record/doc/
#doc: https://help.alwaysdata.com/fr/api/
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "${KAZ_ROOT}"
export PRG="$0"
export IP="127.0.0.1"
export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
#TODO: fetch the list of kaz services instead of hard-coding them
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
export FORCE="NO"
export CMD=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " ${PRG} [-n] [-f] {add/del} sub-domain..."
echo " -h help"
echo " -n simulation"
echo " -f force protected domain"
exit 1
}
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-f' )
shift
export FORCE="YES"
;;
'-n' )
shift
export SIMU="echo"
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
usage
fi
list(){
TARGET=$@
LISTE=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" | jq '.[] | "\(.name):\(.value)"')
echo ${LISTE}
}
saveDns () {
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
badName(){
[[ -z "$1" ]] && return 0;
for item in "${forbidenName[@]}"; do
[[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
done
return 1
}
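The `badName` guard above can be exercised standalone; a minimal sketch with a hypothetical `forbidenName` list (the real one is built from the kaz host variables):

```shell
# Same logic as badName(): return 0 (refuse) for an empty or protected
# name unless FORCE=YES; return 1 (usable) otherwise.
FORCE="NO"
forbidenName=(calc bureau listes www)
badName(){
    [[ -z "$1" ]] && return 0
    for item in "${forbidenName[@]}"; do
        [[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
    done
    return 1
}
badName "calc" && echo "protected"
badName "monasso" || echo "usable"
```

With `FORCE=YES` the protected names are accepted, which is what the `-f` option toggles.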
add(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a ADDED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"CNAME\", \"name\":\"${ARG}\", \"value\":\"${site}.${domain}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
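The record-creation payload is assembled with inline escapes in the curl call above; building it with `printf` keeps the quoting readable. A sketch, with illustrative values for `DOMAIN_ID`, `ARG`, `site` and `domain`:

```shell
# Build the AlwaysData CNAME record payload in one printf instead of
# escaping every quote inside the curl -d argument.
DOMAIN_ID=12345
ARG="monasso"
site="prod1"
domain="kaz.bzh"
payload=$(printf '{"domain":"%s", "type":"CNAME", "name":"%s", "value":"%s.%s"}' \
    "${DOMAIN_ID}" "${ARG}" "${site}" "${domain}")
echo "${payload}"
```

The resulting string can then be passed as-is to `curl -d "${payload}"`.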
del(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a REMOVED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${ARG}&type=CNAME&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${ARG}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
${CMD} $*

bin/dns_gandi.sh Executable file

@@ -0,0 +1,209 @@
#!/bin/bash
# list/add/delete a sub-domain
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "${KAZ_ROOT}"
export PRG="$0"
export IP="127.0.0.1"
export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
export FORCE="NO"
export CMD=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " ${PRG} [-n] [-f] {add/del} sub-domain..."
echo " -h help"
echo " -n simulation"
echo " -f force protected domain"
exit 1
}
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-f' )
shift
export FORCE="YES"
;;
'-n' )
shift
export SIMU="echo"
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
usage
fi
. "${KAZ_KEY_DIR}/env-gandi"
if [[ -z "${GANDI_KEY}" ]] ; then
echo
echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
usage
fi
waitNet () {
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
### wait when error code 503
if [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]; then
echo "DNS not available. Please wait..."
while [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]
do
sleep 5
done
exit
fi
}
list(){
if [[ "${domain}" = "kaz.local" ]]; then
grep --perl-regex "^${IP}\s.*${domain}" "${ETC_HOSTS}" 2> /dev/null | sed -e "s|^${IP}\s*\([0-9a-z.-]*${domain}\)$|\1|g"
return
fi
waitNet
trap 'rm -f "${TMPFILE}"' EXIT
TMPFILE="$(mktemp)" || exit 1
if [[ -n "${SIMU}" ]] ; then
${SIMU} curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"
else
curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null | \
sed "s/,{/\n/g" | \
sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'| \
grep -v '^[_@]'| \
grep -e ":${domain}\.*$" -e ":prod[0-9]*$" > ${TMPFILE}
fi
if [ $# -lt 1 ]; then
cat ${TMPFILE}
else
for ARG in $@
do
cat ${TMPFILE} | grep "${ARG}.*:"
done
fi
}
saveDns () {
for ARG in $@ ; do
if [[ "${ARG}" =~ .local$ ]] ; then
echo "${PRG}: old-fashioned style (remove .local at the end)"
usage;
fi
if [[ "${ARG}" =~ .bzh$ ]] ; then
echo "${PRG}: old-fashioned style (remove .bzh at the end)"
usage;
fi
if [[ "${ARG}" =~ .dev$ ]] ; then
echo "${PRG}: old-fashioned style (remove .dev at the end)"
usage;
fi
done
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
waitNet
${SIMU} curl -X POST "${GANDI_API}/snapshots" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null
}
badName(){
[[ -z "$1" ]] && return 0;
for item in "${forbidenName[@]}"; do
[[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
done
return 1
}
add(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a ADDED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
if grep -q --perl-regex "^${IP}[ \t]" "${ETC_HOSTS}" 2> /dev/null ; then
${SIMU} sudo sed -i -e "0,/^${IP}[ \t]/s/^\(${IP}[ \t]\)/\1${ARG}.${domain} /g" "${ETC_HOSTS}"
else
${SIMU} sudo sed -i -e "$ a ${IP}\t${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null
fi
;;
*)
${SIMU} curl -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"CNAME", "rrset_name":"'${ARG}'", "rrset_values":["'${site}'"]}'
echo
;;
esac
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
del(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a REMOVED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if ! grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
${SIMU} sudo sed -i -e "/^${IP}[ \t]*${ARG}.${domain}[ \t]*$/d" \
-e "s|^\(${IP}.*\)[ \t]${ARG}.${domain}|\1|g" "${ETC_HOSTS}"
;;
* )
${SIMU} curl -X DELETE "${GANDI_API}/records/${ARG}" -H "authorization: Apikey ${GANDI_KEY}"
echo
;;
esac
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
#echo "CMD: ${CMD} $*"
${CMD} $*

bin/dynDNS.sh Executable file

@@ -0,0 +1,176 @@
#!/bin/bash
# nohup /kaz/bin/dynDNS.sh &
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
# no more export in .env
export $(set | grep "domain=")
cd "${KAZ_ROOT}"
export PRG="$0"
export MYHOST="${site}"
MYIP_URL="https://kaz.bzh/myip.php"
DNS_IP=""
DELAI_WAIT=10 # DNS busy
DELAI_GET=5 # min. between 2 requests
DELAI_CHANGE=3600 # 1 h propagation
DELAI_NO_CHANGE=300 # no change, 5 min
BOLD='\e[1m'
RED='\e[0;31m'
GREEN='\e[0;32m'
YELLOW='\e[0;33m'
BLUE='\e[0;34m'
MAGENTA='\e[0;35m'
CYAN='\e[0;36m'
NC='\e[0m' # No Color
NL='
'
export VERBOSE=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " -h help"
echo " -v verbose"
echo " -n simulation"
exit 1
}
#. "${KAZ_KEY_DIR}/env-gandi"
. "${KAZ_KEY_DIR}/env-alwaysdata"
if [[ -z "${ALWAYSDATA_TOKEN}" ]] ; then
echo "no ALWAYSDATA_TOKEN set in ${KAZ_KEY_DIR}/env-alwaysdata"
usage
fi
DOMAIN_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" ${ALWAYSDATA_API}/domain/?name=${domain} | jq '.[0].id')
if [[ -z "${DOMAIN_ID}" ]] ; then
echo "no DOMAIN_ID given by alwaysdata"
usage
fi
# if [[ -z "${GANDI_KEY}" ]] ; then
# echo
# echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
# usage
# exit
# fi
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-v' )
shift
export VERBOSE=":"
;;
'-n' )
shift
export SIMU="echo"
;;
* )
usage
;;
esac
done
log () {
echo -e "${BLUE}$(date +%d-%m-%Y-%H-%M-%S)${NC} : $*"
}
simu () {
echo -e "${YELLOW}$(date +%d-%m-%Y-%H-%M-%S)${NC} : $*"
}
cmdWait () {
# previously (gandi):
#curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - -o /dev/null "${GANDI_API}" 2>/dev/null
curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" --connect-timeout 2 -D - -o /dev/null "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}" 2>/dev/null
}
waitNet () {
### wait when error code 503
if [[ $(cmdWait | head -n1) != *200* ]]; then
log "DNS not available. Please wait..."
while [[ $(cmdWait | head -n1) != *200* ]]; do
[[ -z "${VERBOSE}" ]] || simu curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" --connect-timeout 2 -D - -o /dev/null "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=CNAME&name=${TARGET}"
sleep "${DELAI_WAIT}"
done
exit
fi
}
getDNS () {
# curl -s -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"|
# sed "s/,{/\n/g"|
# sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'|
# grep -e "^${MYHOST}:"|
# sed "s/^${MYHOST}://g" |
# tr -d '\n\t\r '
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}&type=A&name=${MYHOST}" | jq '.[] | "\(.value)"' | tr -d '"'
}
saveDns () {
mkdir -p /root/dns
${SIMU} curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?domain=${DOMAIN_ID}" -o /root/dns/dns_save_$(date +'%Y%m%d%H%M%S')
}
setDNS () {
saveDns
# curl -s -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"A", "rrset_name":"'${MYHOST}'", "rrset_values":["'${IP}'"]}'
${SIMU} curl -s -X POST -d "{\"domain\":\"${DOMAIN_ID}\", \"type\":\"A\", \"name\":\"${MYHOST}\", \"value\":\"${IP}\"}" --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/"
}
while :; do
sleep "${DELAI_GET}"
IP=$(curl -s "${MYIP_URL}" | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | tr -d '\n\t\r ')
if ! [[ ${IP} =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
log "BAD IP ${IP}" ; continue
fi
if [ -z "${DNS_IP}" ]; then
# variable not yet initialized
waitNet
DNS_IP=$(getDNS)
if [ -z "${DNS_IP}" ]; then
# first time the site goes to prod
log "set ${MYHOST} : ${IP}"
setDNS
DNS_IP=$(getDNS)
log "DNS set ${MYHOST}:${IP} (=${DNS_IP})"
sleep "${DELAI_CHANGE}"
continue
fi
fi
if [ "${DNS_IP}" != "${IP}" ]; then
log "${MYHOST} : ${DNS_IP} must change to ${IP}"
# address change
waitNet
#curl -s -X DELETE "${GANDI_API}/records/${MYHOST}" -H "authorization: Apikey ${GANDI_KEY}"
RECORD_ID=$(curl -s -X GET --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/?name=${MYHOST}&type=A&domain=${DOMAIN_ID}" | jq ".[] | select(.name==\"${MYHOST}\").id")
${SIMU} curl -s -X DELETE --basic --user "${ALWAYSDATA_TOKEN} account=${ALWAYSDATA_ACCOUNT}:" "${ALWAYSDATA_API}/record/${RECORD_ID}/"
setDNS
DNS_IP=$(getDNS)
log "DNS reset ${MYHOST}:${IP} (=${DNS_IP})"
sleep "${DELAI_CHANGE}"
else
log "OK ${MYHOST}:${DNS_IP} / ${IP}"
sleep ${DELAI_NO_CHANGE}
fi
done
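The decision logic of the loop above can be factored into a small pure function for clarity; a sketch under assumed names (`decide` is hypothetical, and the real loop also calls the API, saves the DNS and sleeps between steps):

```shell
# Decide what the dynDNS loop should do, given the current public IP
# and the IP currently stored in the DNS A record.
decide(){
    local ip="$1" dns_ip="$2"
    if ! [[ ${ip} =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
        echo "skip"      # malformed IP from the myip service
    elif [ -z "${dns_ip}" ]; then
        echo "create"    # first run: no A record yet
    elif [ "${dns_ip}" != "${ip}" ]; then
        echo "update"    # address changed: delete then re-create the record
    else
        echo "ok"        # nothing to do
    fi
}
decide "not-an-ip" ""        # → skip
decide "81.2.3.4" ""         # → create
decide "81.2.3.4" "81.2.3.5" # → update
decide "81.2.3.4" "81.2.3.4" # → ok
```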


@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_ROOT/secret/env-kaz
PRG=$(basename $0)
@@ -23,7 +22,7 @@ PRG=$(basename $0)
# TEMPO_ACTION_STOP=2 # Lors de redémarrage avec tempo, on attend après le stop
# TEMPO_ACTION_START=60 # Lors de redémarrage avec tempo, avant de reload le proxy
# DEFAULTCONTAINERS="cloud agora wp wiki office paheko castopod"
# DEFAULTCONTAINERS="cloud agora wp wiki office paheko castopod spip"
# APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
@@ -42,16 +41,16 @@ CONTAINERS_TYPES=
declare -A DockerServNames # le nom des containers correspondant
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" [castopod]="${castopodServName}" )
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" [castopod]="${castopodServName}" [spip]="${spipServName}" )
declare -A FilterLsVolume # Pour trouver quel volume appartient à quel container
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" [castopod]="castopodMedia" )
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" [castopod]="castopodMedia" [spip]="spip")
declare -A composeDirs # Le nom du repertoire compose pour le commun
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" [castopod]="castopod" )
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" [castopod]="castopod" [spip]="spip")
declare -A serviceNames # Le nom du service dans le dockerfile d'orga
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora" [castopod]="castopod")
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora" [castopod]="castopod" [spip]="spip")
declare -A subScripts
subScripts=( [cloud]="manageCloud.sh" [agora]="manageAgora.sh" [wiki]="manageWiki.sh" [wp]="manageWp.sh" [castopod]="manageCastopod.sh" )
@@ -93,6 +92,7 @@ CONTAINERS_TYPES
-office Les collabora
-paheko Le paheko
-castopod Les castopod
-spip Les spip
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du container
@@ -551,6 +551,8 @@ for ARG in "$@"; do
CONTAINERS_TYPES="${CONTAINERS_TYPES} paheko" ;;
'-pod'|'--pod'|'-castopod'|'--castopod')
CONTAINERS_TYPES="${CONTAINERS_TYPES} castopod" ;;
'-spip')
CONTAINERS_TYPES="${CONTAINERS_TYPES} spip" ;;
'-t' )
COMMANDS="${COMMANDS} RESTART-COMPOSE" ;;
'-r' )


@@ -7,7 +7,7 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
PRG=$(basename $0)


@@ -1,14 +1,19 @@
#!/bin/bash
# gestion des utilisateurs de kaz ( mail, cloud général, mattermost )
# Ki : Did
# koi : gestion globale des users Kaz mais aussi les users d'autres domaines hébergés
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_KEY_DIR/env-ldapServ
. $KAZ_KEY_DIR/env-nextcloudServ
. $KAZ_KEY_DIR/env-sympaServ
. $KAZ_KEY_DIR/env-paheko
VERSION="5-12-2024"
VERSION="18-05-2025"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
IFS=' '
@@ -18,11 +23,11 @@ LOG=$RACINE".log"
URL_NC=$(echo $cloudHost).$(echo $domain)
URL_AGORA=$(echo $matterHost).$(echo $domain)
URL_LISTE=$(echo $sympaHost).$(echo $domain)
URL_PAHEKO="$httpProto://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.$(echo $domain)"
URL_PAHEKO="$httpProto://${API_USER}:${API_PASSWORD}@kaz-paheko.$(echo $domain)"
NL_LIST=infos@listes.kaz.bzh
URL_AGORA_API=${URL_AGORA}/api/v4
EQUIPE=kaz
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
LISTMASTER=$(echo ${LISTMASTERS} | cut -d',' -f1)
#### Test du serveur sur lequel s' execute le script ####
@@ -45,6 +50,8 @@ rm -rf /tmp/*.json
############################################ Fonctions #######################################################
ExpMail() {
. $KAZ_KEY_DIR/env-mail
MAIL_DEST=$1
MAIL_SUJET=$2
MAIL_TEXTE=$3
@@ -56,6 +63,7 @@ ExpMail() {
}
PostMattermost() {
. $KAZ_KEY_DIR/env-mattermostAdmin
PostM=$1
CHANNEL=$2
TEAMID=$(curl -s -H "Authorization: Bearer ${mattermost_token}" "${URL_AGORA_API}/teams/name/${EQUIPE}" | jq .id | sed -e 's/"//g')
@@ -89,8 +97,8 @@ searchEmail() {
fi
done
ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=${SEARCH_OBJECT_CLASS})(cn=*${RMAIL}*))" cn | grep ^cn | sed -e 's/^cn: //' >$TFILE_EMAILS
COMPTEUR_LIGNE=0
while read LIGNE
@@ -134,6 +142,7 @@ searchEmail() {
searchMattermost() {
#Ici $1 est une adresse email
. $KAZ_KEY_DIR/env-mattermostAdmin
docker exec -ti ${mattermostServName} bin/mmctl --suppress-warnings auth login $httpProto://$URL_AGORA --name local-server --username $mattermost_user --password $mattermost_pass >/dev/null 2>&1
docker exec -ti ${mattermostServName} bin/mmctl --suppress-warnings config set ServiceSettings.EnableAPIUserDeletion "true" >/dev/null 2>&1
#on créé la list des mails dans mattermost
@@ -180,12 +189,12 @@ infoEmail() {
printKazMsg " DETAILS DU COMPTE DANS NEXTCLOUD PRINCIPAL"
echo -e ""
#TEMP_USER_NC=$(mktemp /tmp/$RACINE.XXXXXXXXX.TEMP_USER_NC)
#curl -s -o $TEMP_USER_NC -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$nextcloud_NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=$CHOIX_MAIL
#curl -s -o $TEMP_USER_NC -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=$CHOIX_MAIL
#cat $TEMP_USER_NC | grep -i "element" | sed -e s/[\<\>\/]//g | sed -e s/element//g
echo -ne "${NC}"
echo -ne " - Nextcloud enable : "
echo -ne "${GREEN}"
ldapsearch -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i nextcloudEnabled | cut -c 18-30
ldapsearch -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i nextcloudEnabled | cut -c 18-30
echo -ne "${NC}"
echo -e "${NC} ------------------------------------------------"
printKazMsg " DETAILS DU COMPTE DANS LDAP ET PAHEKO"
@@ -201,11 +210,11 @@ infoEmail() {
echo -ne "${NC}"
echo -n " - Quota Mail (Ldap) : "
echo -ne "${GREEN}"
ldapsearch -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i mailquota | cut -c 11-60
ldapsearch -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i mailquota | cut -c 11-60
echo -ne "${NC}"
echo -n " - Quota Nextcloud (Ldap) : "
echo -ne "${GREEN}"
ldapsearch -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i nextcloudquota | cut -c 17-60
ldapsearch -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i nextcloudquota | cut -c 17-60
echo -ne "${NC}"
echo -n " - Mail de secours (Paheko ): "
echo -ne "${GREEN}"
@@ -213,11 +222,11 @@ infoEmail() {
echo -ne "${NC}"
echo -n " - Mail de secours (Ldap): "
echo -ne "${GREEN}"
ldapsearch -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i maildeSecours | sed -e 's/mailDeSecours://'
ldapsearch -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i maildeSecours | sed -e 's/mailDeSecours://'
echo -ne "${NC}"
echo -n " - Alias (Ldap) : "
echo -ne "${GREEN}"
LDAP_ALIAS=$(ldapsearch -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i alias | cut -c 11-60)
LDAP_ALIAS=$(ldapsearch -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" -b "cn=${CHOIX_MAIL},ou=users,${ldap_root}" | grep -i alias | cut -c 11-60)
echo -ne "${NC}"
echo -ne "${GREEN}"
for ldap_alias in ${LDAP_ALIAS}
@@ -237,8 +246,8 @@ infoEmail() {
echo "------------------------------------------------"
echo " Alias : ${CHOIX_MAIL} "
echo ""
for INFOALIAS in $(ldapsearch -H ldap://${LDAP_IP} -x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" -b "${ldap_root}" "(&(objectclass=PostfixBookMailForward)(cn=*${CHOIX_MAIL}*))" mail \
for INFOALIAS in $(ldapsearch -H ldap://${LDAP_IP} -x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" -b "${ldap_root}" "(&(objectclass=PostfixBookMailForward)(cn=*${CHOIX_MAIL}*))" mail \
| grep ^mail: | sed -e 's/^mail://')
do
echo -ne "=====> ${GREEN} "
@@ -305,12 +314,12 @@ searchDestroy() {
fi
echo -e "${NC}"
echo -e "Recherche de ${GREEN} ${REP_SEARCH_DESTROY} ${NC} dans nextcloud"
USER_NEXTCLOUD_SUPPR=$(curl -s -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$nextcloud_NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=${REP_SEARCH_DESTROY} | grep element | sed -s 's/[ \<\>\/]//g' | sed 's/element//g')
USER_NEXTCLOUD_SUPPR=$(curl -s -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=${REP_SEARCH_DESTROY} | grep element | sed -s 's/[ \<\>\/]//g' | sed 's/element//g')
if [ ! -z ${USER_NEXTCLOUD_SUPPR} ]
then
printKazMsg "le user trouvé est : ${USER_NEXTCLOUD_SUPPR}"
echo -e "${RED} Suppression de ${USER_NEXTCLOUD_SUPPR}"
curl -H 'OCS-APIREQUEST: true' -X DELETE $httpProto://admin:$nextcloud_NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users/${USER_NEXTCLOUD_SUPPR} >/dev/null 2>&1
curl -H 'OCS-APIREQUEST: true' -X DELETE $httpProto://admin:$NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users/${USER_NEXTCLOUD_SUPPR} >/dev/null 2>&1
if [ "$?" -eq "0" ]
then
printKazMsg "Suppression ok"
@@ -325,7 +334,7 @@ searchDestroy() {
echo -e "${RED} suppression de ${REP_SEARCH_DESTROY} dans la liste info de sympa"
echo -e "${NC}"
echo ""
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=del --service_parameters="${NL_LIST},${REP_SEARCH_DESTROY}"
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=del --service_parameters="${NL_LIST},${REP_SEARCH_DESTROY}"
echo -e "${NC}"
echo ""
echo -e "${RED} suppression de ${REP_SEARCH_DESTROY} dans le serveur de mail"
@@ -342,7 +351,7 @@ searchDestroy() {
echo -e "${RED} suppression de ${REP_SEARCH_DESTROY} dans le ldap"
echo -e "${NC}"
echo ""
ldapdelete -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" "cn=${REP_SEARCH_DESTROY},ou=users,${ldap_root}"
ldapdelete -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" "cn=${REP_SEARCH_DESTROY},ou=users,${ldap_root}"
if [ "$?" -eq "0" ]
then
printKazMsg "Suppression ok"
@@ -375,8 +384,8 @@ gestPassword() {
# MAIL_SECOURS=$(jq .results[].email_secours $FICMAILSECOURS | sed -e 's/\"//g')
MAIL_SECOURS=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=inetOrgPerson)(cn=*${CHOIX_MAIL}*))" | grep ^mailDeSecours | sed -e 's/^mailDeSecours: //')
if [ "$MAIL_SECOURS" = "" ]
then
@@ -403,19 +412,19 @@ gestPassword() {
fi
if [ "$SEARCH_RESET_INPUT" = "o" ] || [ "$SEARCH_RESET_INPUT" = "O" ]
then
USER_NEXTCLOUD_MODIF=$(curl -s -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$nextcloud_NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=${COMPTE_A_MODIFIER} | grep element | sed -e 's/[ \<\>\/]//g' -e 's/element//g')
USER_NEXTCLOUD_MODIF=$(curl -s -X GET -H 'OCS-APIRequest:true' $httpProto://admin:$NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users?search=${COMPTE_A_MODIFIER} | grep element | sed -e 's/[ \<\>\/]//g' -e 's/element//g')
echo -e "$GREEN Compte à modifier = $RED ${COMPTE_A_MODIFIER} ${NC}"
echo -e "$GREEN Mail de secours = $RED ${MAIL_SECOURS} ${NC}"
echo -e "$GREEN Compte $RED $(searchMattermost $COMPTE_A_MODIFIER) ${NC}"
echo -e "$GREEN Compte Nextcloud $RED ${USER_NEXTCLOUD_MODIF} ${NC}"
echo -e "$GREEN Le mot de passe sera = $RED ${PASSWORD} ${NC}"
docker exec -ti mattermostServ bin/mmctl user change-password $(searchMattermost $COMPTE_A_MODIFIER) -p $PASSWORD >/dev/null 2>&1
curl -H 'OCS-APIREQUEST: true' -X PUT $httpProto://admin:$nextcloud_NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users/${USER_NEXTCLOUD_MODIF} -d key=password -d value=${PASSWORD} >/dev/null 2>&1
curl -H 'OCS-APIREQUEST: true' -X PUT $httpProto://admin:$NEXTCLOUD_ADMIN_PASSWORD@$URL_NC/ocs/v1.php/cloud/users/${USER_NEXTCLOUD_MODIF} -d key=password -d value=${PASSWORD} >/dev/null 2>&1
pass=$(mkpasswd -m sha512crypt ${PASSWORD})
echo -e "\n\ndn: cn=${COMPTE_A_MODIFIER},ou=users,${ldap_root}\n\
changeType: modify\n\
replace: userPassword\n\
userPassword: {CRYPT}${pass}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}"
userPassword: {CRYPT}${pass}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}"
echo -e "Envoi d'un message dans mattermost pour la modification du mot de passe"
docker exec -ti mattermostServ bin/mmctl post create kaz:Creation-Comptes --message "Le mot de passe du compte ${COMPTE_A_MODIFIER} a été modifié" >/dev/null 2>&1
if [ $ADRESSE_SEC == "OUI" ]
@@ -463,8 +472,8 @@ createMail() {
if [[ ${EMAIL_SOUHAITE} =~ ${regexMail} ]]
then
ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=inetOrgPerson)(cn=${EMAIL_SOUHAITE}))" cn | grep ^cn | sed -e 's/^cn: //' >$TFILE_EMAILS
if grep -q "^${EMAIL_SOUHAITE}$" "${TFILE_EMAILS}"
then
@@ -562,7 +571,7 @@ nextcloudEnabled: ${TRUE_KAZ}\n\
nextcloudQuota: ${QUOTA} GB\n\
mobilizonEnabled: ${TRUE_KAZ}\n\
agoraEnabled: ${TRUE_KAZ}\n\
userPassword: {CRYPT}${LDAPPASS}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}\" -x -w ${ldap_LDAP_ADMIN_PASSWORD}" >${TFILE_CREATE_MAIL}
userPassword: {CRYPT}${LDAPPASS}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${LDAP_ADMIN_USERNAME},${ldap_root}\" -x -w ${LDAP_ADMIN_PASSWORD}" >${TFILE_CREATE_MAIL}
# on execute le fichier avec les données ldap pour créer l' entrée dans l' annuaire
bash ${TFILE_CREATE_MAIL} >/dev/null
# on colle le compte et le mot de passe dans le fichier
@@ -608,12 +617,12 @@ createAlias() {
if [[ ${AMAIL} =~ ${regexMail} ]]
then
RESU_ALIAS=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=PostfixBookMailForward)(cn=*${AMAIL}*))" | grep ^cn | sed -e 's/^cn: //')
RESU_ALIAS_IS_MAIL=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=inetOrgPerson)(cn=*${AMAIL}*))" cn | grep ^cn | sed -e 's/^cn: //')
if echo ${RESU_ALIAS} | grep -q "^${AMAIL}$" || echo ${RESU_ALIAS_IS_MAIL} | grep -q "^${AMAIL}$"
@@ -688,7 +697,7 @@ changeType: add\n\
objectClass: organizationalRole\n\
objectClass: PostfixBookMailForward\n\
mailAlias: ${AMAIL}\n\
${LDAPALAISMAIL}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
${LDAPALAISMAIL}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${LDAP_ADMIN_PASSWORD}
fait=1
printKazMsg "Création de ${AMAIL}"
sleep 3
@@ -720,8 +729,8 @@ delAlias() {
if [[ ${RALIAS} =~ ${regexMail} ]]
then
RESU_ALIAS=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=PostfixBookMailForward)(cn=${RALIAS}))" cn | grep ^cn | sed -e 's/^cn: //')
if [ ! -z ${RESU_ALIAS} ]
then
@@ -731,7 +740,7 @@ delAlias() {
read -p "suppression de ${RESU_ALIAS} ? (o/n): " REPDELALIAS
case "${REPDELALIAS}" in
o | O )
ldapdelete -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${ldap_LDAP_ADMIN_PASSWORD}" "cn=${RESU_ALIAS},ou=mailForwardings,${ldap_root}"
ldapdelete -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w "${LDAP_ADMIN_PASSWORD}" "cn=${RESU_ALIAS},ou=mailForwardings,${ldap_root}"
printKazMsg "suppression ${RESU_ALIAS} effectuée"
sleep 2
faitdel=1
@@ -767,8 +776,8 @@ modifyAlias()
ACHANGE=0
searchEmail alias
LISTE_MAIL_ALIAS=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=PostfixBookMailForward)(cn=*${CHOIX_MAIL}*))" \
| grep -i ^mail: | sed -e 's/^mail: /_/' | tr -d [:space:] | sed -s 's/_/ /g')
echo "-------------------------------------------------------------------"
@@ -843,8 +852,8 @@ modifyAlias()
echo "mail: ${key}" >>${FIC_MODIF_LDIF}
done
echo "-" >>${FIC_MODIF_LDIF}
ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-x -w ${ldap_LDAP_ADMIN_PASSWORD} \
ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-x -w ${LDAP_ADMIN_PASSWORD} \
-f ${FIC_MODIF_LDIF} >/dev/null
else
printKazMsg "Pas de changement"
@@ -870,8 +879,8 @@ updateUser() {
for attribut in mailDeSecours mailAlias mailQuota nextcloudQuota
do
ATTRIB+=([${attribut}]=$(ldapsearch -H ldap://${LDAP_IP} \
-x -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${ldap_LDAP_ADMIN_PASSWORD}" \
-x -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-w "${LDAP_ADMIN_PASSWORD}" \
-b "${ldap_root}" "(&(objectclass=inetOrgPerson)(cn=*${CHOIX_MAIL}*))" \
| grep ^"${attribut}": | sed -e 's/^'${attribut}': //' | tr -s '[:space:]' ' ' ))
# si l' attribut est mailDesecours on l' attrape et on on le stocke pour pouvoir l' enlever de sympa
@@ -968,9 +977,9 @@ updateUser() {
MAILALIAS_CHANGE=0
for VALMAIL in ${CONTENU_ATTRIBUT}
do
read -p " - On garde ${VALMAIL} (o/n) ? [o] : " READVALMAIL
read -p " - On garde ${VALMAIL} (o/n) [o] ? : " READVALMAIL
case ${READVALMAIL} in
* | "" | o | O )
"" | o | O )
NEW_CONTENU_ATTRIBUT="${NEW_CONTENU_ATTRIBUT} ${VALMAIL}"
;;
n | N )
@@ -1007,7 +1016,7 @@ updateUser() {
done
;;
"" | n | N )
#CHANGED+=([mailAlias]="${NEW_CONTENU_ATTRIBUT}")
CHANGED+=([mailAlias]="${NEW_CONTENU_ATTRIBUT}")
;;
* )
printKazMsg "Erreur"
@@ -1054,15 +1063,15 @@ updateUser() {
done
cat ${FIC_MODIF_LDIF}
sleep 3
ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" \
-x -w ${ldap_LDAP_ADMIN_PASSWORD} \
ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" \
-x -w ${LDAP_ADMIN_PASSWORD} \
-f ${FIC_MODIF_LDIF}
if [ ! -z ${MAILDESECOURS} ]
then
# suppression du mail de secours de la liste infos
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=del --service_parameters="${NL_LIST},${MAILDESECOURSACTUEL}"
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=del --service_parameters="${NL_LIST},${MAILDESECOURSACTUEL}"
# ajout de l' adresse de la nouvelle adresse de secours
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=add --service_parameters="${NL_LIST},${MAILDESECOURS}"
docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${SOAP_USER} --trusted_application_password=${SOAP_PASSWORD} --proxy_vars=USER_EMAIL=${LISTMASTER} --service=add --service_parameters="${NL_LIST},${MAILDESECOURS}"
fi
updateUser
fi

bin/getX509Certificates.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
#koi: récupération des certifs traefik vers x509 pour mail et listes
#ki: fanch
#kan: 18/04/2025

KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"

certificates="mail listes"
for i in ${certificates}; do
    jq -r ".letsencrypt.Certificates[] | select(.domain.main==\"${i}.${domain}\") | .certificate" /var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json | base64 -d > /etc/ssl/certs/${i}.pem
    jq -r ".letsencrypt.Certificates[] | select(.domain.main==\"${i}.${domain}\") | .key" /var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json | base64 -d > /etc/ssl/private/${i}.key
    chmod 600 /etc/ssl/private/${i}.key
done
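The jq filters in this script assume traefik's acme.json layout: one entry per certificate under `.letsencrypt.Certificates[]`, with the PEM data base64-encoded. A hedged sketch of the same extraction in Python (the acme structure below is fabricated for illustration, not a real certificate):

```python
import base64
import json

def extract_cert(acme: dict, main_domain: str) -> bytes:
    """Return the decoded PEM certificate for `main_domain` from an
    acme.json-like structure (layout assumed from the jq filters above)."""
    for entry in acme["letsencrypt"]["Certificates"]:
        if entry["domain"]["main"] == main_domain:
            return base64.b64decode(entry["certificate"])
    raise KeyError(main_domain)

# Tiny fabricated example:
acme = {"letsencrypt": {"Certificates": [
    {"domain": {"main": "mail.example.org"},
     "certificate": base64.b64encode(b"-----BEGIN CERTIFICATE-----").decode()}]}}
print(extract_cert(acme, "mail.example.org").decode())  # -----BEGIN CERTIFICATE-----
```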


@@ -214,7 +214,6 @@ fi
if [ ! -d "${KAZ_ROOT}/secret" ]; then
rsync -a "${KAZ_ROOT}/secret.tmpl/" "${KAZ_ROOT}/secret/"
. "${KAZ_ROOT}/secret/SetAllPass.sh"
"${KAZ_BIN_DIR}/secretGen.sh"
"${KAZ_BIN_DIR}/updateDockerPassword.sh"
"${KAZ_BIN_DIR}/createDBUsers.sh"
fi


@@ -123,6 +123,8 @@ export DebugLog="${KAZ_ROOT}/log/log-install-$(date +%y-%m-%d-%T)-"
if [[ " ${DOCKERS_LIST[*]} " =~ " traefik " ]]; then
# on initialise traefik :-(
${KAZ_COMP_DIR}/traefik/first.sh
# on démarre traefik (plus lancé dans container.sh)
docker-compose -f ${KAZ_COMP_DIR}/traefik/docker-compose.yml up -d
fi
if [[ " ${DOCKERS_LIST[*]} " =~ " etherpad " ]]; then


@@ -1,13 +1,15 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_PAHEKO="$httpProto://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.$(echo $domain)"
. $KAZ_KEY_DIR/env-paheko
URL_PAHEKO="$httpProto://${API_USER}:${API_PASSWORD}@kaz-paheko.$(echo $domain)"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
@@ -76,6 +78,10 @@ Int_paheko_Action() {
do
eval $VAL_GAR=$(jq .$VAL_GAR ${TFILE_INT_PAHEKO_IDFILE})
done
################################
# test du mail valide en $domain
echo ${email} | grep -i "${domain}" || { echo "le mail ${email} n'est pas en ${domain}"; exit ;}
################################
#comme tout va bien on continue
#on compte le nombre de champs dans la zone nom pour gérer les noms et prénoms composés
# si il y a 3 champs, on associe les 2 premieres valeurs avec un - et on laisse le 3ème identique
@@ -145,6 +151,9 @@ Int_paheko_Action() {
nc_base="N"
admin_orga="O"
fi
#On met le mail et le mail de secours en minuscules
email=$(echo $email | tr [:upper:] [:lower:])
email_secours=$(echo $email_secours | tr [:upper:] [:lower:])
# Pour le reste on renomme les null en N ( non ) et les valeurs 1 en O ( Oui)
cloud=$(echo $cloud | sed -e 's/0/N/g' | sed -e 's/1/O/g')
paheko=$(echo $garradin | sed -e 's/0/N/g' | sed -e 's/1/O/g')
@@ -155,11 +164,11 @@ Int_paheko_Action() {
echo "$nom_ok;$prenom_ok;$email;$email_secours;$nom_orga;$admin_orga;$cloud;$paheko;$wordpress;$agora;$docuwiki;$nc_base;$groupe_nc_base;$equipe_agora;$quota_disque">>${FILE_CREATEUSER}
done
else
echo "Rien à créer"
[ "$OPTION" = "silence" ] || echo "Rien à créer"
exit 2
fi
}
#Int_paheko_Action "A créer" "silence"
Int_paheko_Action "A créer"
# Main
Int_paheko_Action "A créer" "silence"
exit 0


@@ -7,6 +7,5 @@ setKazVars
FILE_LDIF=/home/sauve/ldap.ldif
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
docker exec -u 0 -i ${ldapServName} slapcat -F /opt/bitnami/openldap/etc/slapd.d -b ${ldap_root} | gzip >${FILE_LDIF}.gz


@@ -5,7 +5,7 @@ KAZ_ROOT=/kaz
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_KEY_DIR/env-ldapServ
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
@@ -20,4 +20,4 @@ EDITOR=${EDITOR:-vi}
EDITOR=${EDITOR:-vi}
export EDITOR=${EDITOR}
ldapvi -h $LDAP_IP -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -w ${ldap_LDAP_ADMIN_PASSWORD} --discover
ldapvi -h $LDAP_IP -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -w ${LDAP_ADMIN_PASSWORD} --discover


@@ -8,12 +8,13 @@ KAZ_ROOT=/kaz
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_KEY_DIR/env-ldapServ
. $KAZ_KEY_DIR/env-paheko
ACCOUNTS=/kaz/dockers/postfix/config/postfix-accounts.cf
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
URL_GARRADIN="$httpProto://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.$(echo $domain)"
URL_GARRADIN="$httpProto://${API_USER}:${API_PASSWORD}@kaz-paheko.$(echo $domain)"
# docker exec -i nextcloudDB mysql --user=${nextcloud_MYSQL_USER} --password=${nextcloud_MYSQL_PASSWORD} ${nextcloud_MYSQL_DATABASE} <<< "select * from oc_accounts;" > /tmp/oc_accounts
ERRORS="/tmp/ldap-errors.log"
@@ -126,7 +127,7 @@ replace: agoraEnabled\n\
agoraEnabled: TRUE\n\
-\n\
replace: mobilizonEnabled\n\
mobilizonEnabled: TRUE\n\n" | tee /tmp/ldap/${mail}.ldif | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
mobilizonEnabled: TRUE\n\n" | tee /tmp/ldap/${mail}.ldif | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${LDAP_ADMIN_PASSWORD}
done
#replace: nextcloudEnabled\n\
@@ -164,7 +165,7 @@ do
echo -e "dn: cn=${mail},ou=users,${ldap_root}\n\
changeType: modify\n\
replace: mailAlias\n\
$LIST\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
$LIST\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${LDAP_ADMIN_PASSWORD}
else
echo "Alias vers un mail externe, go fichier"
echo $line >> ${ALIASES_WITHLDAP}
@@ -185,7 +186,7 @@ replace: mailAlias\n\
mailAlias: ${src}\n\
-\n\
replace: mail\n\
mail: ${dst}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
mail: ${dst}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${LDAP_ADMIN_PASSWORD}
fi
else
echo "Forward vers plusieurs adresses, on met dans le fichier"
@@ -215,7 +216,7 @@ replace: mailAlias\n\
mailAlias: ${src}\n\
-\n\
replace: mail\n\
${LIST}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
${LIST}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${LDAP_ADMIN_PASSWORD}
fi
done


@@ -5,16 +5,17 @@ KAZ_ROOT=/kaz
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_KEY_DIR/env-ldapServ
. $KAZ_KEY_DIR/env-nextcloudDB
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
docker exec -i nextcloudDB mysql --user=${nextcloud_MYSQL_USER} --password=${nextcloud_MYSQL_PASSWORD} ${nextcloud_MYSQL_DATABASE} <<< "select uid from oc_users;" > /tmp/nc_users.txt
docker exec -i nextcloudDB mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} ${MYSQL_DATABASE} <<< "select uid from oc_users;" > /tmp/nc_users.txt
OLDIFS=${IFS}
IFS=$'\n'
for line in `cat /tmp/nc_users.txt`; do
result=$(ldapsearch -h $LDAP_IP -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -w ${ldap_LDAP_ADMIN_PASSWORD} -b $ldap_root -x "(identifiantKaz=${line})" | grep numEntries)
result=$(ldapsearch -h $LDAP_IP -D "cn=${LDAP_ADMIN_USERNAME},${ldap_root}" -w ${LDAP_ADMIN_PASSWORD} -b $ldap_root -x "(identifiantKaz=${line})" | grep numEntries)
echo "${line} ${result}" | grep -v "numEntries: 1" | grep -v "^uid"
done
IFS=${OLDIFS}

bin/lib/config.py Normal file

@@ -0,0 +1,15 @@
DOCKERS_ENV = "/kaz/config/dockers.env"
SECRETS = "/kaz/secret/env-{serv}"

def getDockersConfig(key):
    with open(DOCKERS_ENV) as config:
        for line in config:
            if line.startswith(f"{key}="):
                return line.split("=", 1)[1].split("#")[0].strip()

def getSecretConfig(serv, key):
    with open(SECRETS.format(serv=serv)) as config:
        for line in config:
            if line.startswith(f"{key}="):
                return line.split("=", 1)[1].split("#")[0].strip()
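Both helpers in `bin/lib/config.py` treat everything after the first `=` as the value and strip a trailing `#` comment. A standalone sketch of that parsing rule: the maxsplit of 1 is what keeps values containing `=` (such as an LDAP root DN) intact, though a `#` inside a value would still be truncated:

```python
import io

def parse_value(line: str, key: str):
    """Mirror of getDockersConfig's parsing: value is everything after the
    first '=', with any trailing '#' comment stripped."""
    if line.startswith(f"{key}="):
        return line.split("=", 1)[1].split("#")[0].strip()

sample = io.StringIO("domain=kaz.bzh  # prod domain\nldap_root=dc=kaz,dc=bzh\n")
for line in sample:
    v = parse_value(line, "ldap_root")
    if v is not None:
        print(v)   # dc=kaz,dc=bzh
```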

bin/lib/ldap.py Normal file

@@ -0,0 +1,101 @@
import ldap
from passlib.hash import sha512_crypt
from email_validator import validate_email, EmailNotValidError
import subprocess
from .config import getDockersConfig, getSecretConfig
class Ldap:
def __init__(self):
self.ldap_connection = None
self.ldap_root = getDockersConfig("ldap_root")
self.ldap_admin_username = getSecretConfig("ldapServ", "LDAP_ADMIN_USERNAME")
self.ldap_admin_password = getSecretConfig("ldapServ", "LDAP_ADMIN_PASSWORD")
cmd="docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ"
self.ldap_host = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT).strip().decode()
def __enter__(self):
self.ldap_connection = ldap.initialize(f"ldap://{self.ldap_host}")
self.ldap_connection.simple_bind_s("cn={},{}".format(self.ldap_admin_username, self.ldap_root), self.ldap_admin_password)
return self
def __exit__(self, tp, e, traceback):
self.ldap_connection.unbind_s()
def get_email(self, email):
"""
Vérifier si un utilisateur avec cet email existe dans le LDAP soit comme mail principal soit comme alias
"""
# Créer une chaîne de filtre pour rechercher dans les champs "cn" et "mailAlias"
filter_str = "(|(cn={})(mailAlias={}))".format(email, email)
result = self.ldap_connection.search_s("ou=users,{}".format(self.ldap_root), ldap.SCOPE_SUBTREE, filter_str)
return result
    def delete_user(self, email):
        """
        Delete a user from LDAP by email address.
        """
        try:
            # Look the user up
            result = self.ldap_connection.search_s("ou=users,{}".format(self.ldap_root), ldap.SCOPE_SUBTREE, "(cn={})".format(email))
            if not result:
                return False  # user not found
            # Extract the user's DN
            dn = result[0][0]
            # Delete the entry
            self.ldap_connection.delete_s(dn)
            return True  # user successfully deleted
        except ldap.NO_SUCH_OBJECT:
            return False  # user not found
        except ldap.LDAPError:
            return False  # deletion failed

    def create_user(self, email, prenom, nom, password, email_secours, quota):
        """
        Create a new LDAP entry for a new user.
        QUESTION: what are PRENOM/NOM/IDENT_KAZ used for in LDAP? Why three quota attributes?
        """
        password_chiffre = sha512_crypt.hash(password)
        if not validate_email(email) or not validate_email(email_secours):
            return False
        if self.get_email(email):
            return False
        # Build the DN
        dn = f"cn={email},ou=users,{self.ldap_root}"
        mod_attrs = [
            ('objectClass', [b'inetOrgPerson', b'PostfixBookMailAccount', b'nextcloudAccount', b'kaznaute']),
            ('sn', f'{prenom} {nom}'.encode('utf-8')),
            ('mail', email.encode('utf-8')),
            ('mailEnabled', b'TRUE'),
            ('mailGidNumber', b'5000'),
            ('mailHomeDirectory', f"/var/mail/{email.split('@')[1]}/{email.split('@')[0]}/".encode('utf-8')),
            ('mailQuota', f'{quota}G'.encode('utf-8')),
            ('mailStorageDirectory', f"maildir:/var/mail/{email.split('@')[1]}/{email.split('@')[0]}/".encode('utf-8')),
            ('mailUidNumber', b'5000'),
            ('mailDeSecours', email_secours.encode('utf-8')),
            ('identifiantKaz', f'{prenom.lower()}.{nom.lower()}'.encode('utf-8')),
            ('quota', str(quota).encode('utf-8')),
            ('nextcloudEnabled', b'TRUE'),
            ('nextcloudQuota', f'{quota} GB'.encode('utf-8')),
            ('mobilizonEnabled', b'TRUE'),
            ('agoraEnabled', b'TRUE'),
            ('userPassword', f'{{CRYPT}}{password_chiffre}'.encode('utf-8')),
            ('cn', email.encode('utf-8'))
        ]
        self.ldap_connection.add_s(dn, mod_attrs)
        return True
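`create_user` derives both maildir attributes from the address by splitting on `@` (domain directory first, then the local part); the path logic in isolation, with an assumed example address:

```python
def mail_home(email: str) -> str:
    # Mirrors mailHomeDirectory above: /var/mail/<domain>/<local-part>/
    local_part, domain = email.split('@')
    return f"/var/mail/{domain}/{local_part}/"

assert mail_home("prenom.nom@kaz.bzh") == "/var/mail/kaz.bzh/prenom.nom/"
```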

bin/lib/mattermost.py (new file)
@@ -0,0 +1,134 @@
import subprocess

from .config import getDockersConfig, getSecretConfig

mattermost_user = getSecretConfig("mattermostServ", "MM_ADMIN_USER")
mattermost_pass = getSecretConfig("mattermostServ", "MM_ADMIN_PASSWORD")
mattermost_url = f"https://{getDockersConfig('matterHost')}.{getDockersConfig('domain')}"
mmctl = "docker exec -i mattermostServ bin/mmctl"

class Mattermost:
    def __init__(self):
        pass

    def __enter__(self):
        self.authenticate()
        return self

    def __exit__(self, tp, e, traceback):
        self.logout()

    def authenticate(self):
        # Log in to Mattermost
        cmd = f"{mmctl} auth login {mattermost_url} --name local-server --username {mattermost_user} --password {mattermost_pass}"
        subprocess.run(cmd, shell=True, stderr=subprocess.STDOUT, check=True)

    def logout(self):
        # Clear the Mattermost credentials
        cmd = f"{mmctl} auth clean"
        subprocess.run(cmd, shell=True, stderr=subprocess.STDOUT, check=True)

    def post_message(self, message, equipe="kaz", canal="creation-comptes"):
        """
        Post a message to a Mattermost team/channel.
        """
        cmd = f"{mmctl} post create {equipe}:{canal} --message \"{message}\""
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def get_user(self, user):
        """
        Does the user exist on Mattermost?
        """
        try:
            cmd = f"{mmctl} user search {user} --json"
            subprocess.check_output(cmd, shell=True)
            return True  # the username exists
        except subprocess.CalledProcessError:
            return False

    def create_user(self, user, email, password):
        """
        Create a Mattermost user.
        """
        cmd = f"{mmctl} user create --email {email} --username {user} --password {password}"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def delete_user(self, email):
        """
        Delete a Mattermost user.
        """
        cmd = f"{mmctl} user delete {email} --confirm"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def update_password(self, email, new_password):
        """
        Change the password of a Mattermost user.
        """
        cmd = f"{mmctl} user change-password {email} --password {new_password}"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def add_user_to_team(self, email, equipe):
        """
        Add a user to a Mattermost team.
        """
        cmd = f"{mmctl} team users add {equipe} {email}"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def add_user_to_channel(self, email, equipe, canal):
        """
        Add a user to a Mattermost channel.
        """
        cmd = f'{mmctl} channel users add {equipe}:{canal} {email}'
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()

    def get_teams(self):
        """
        List the Mattermost teams.
        """
        cmd = f"{mmctl} team list --disable-pager"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        data_list = output.decode("utf-8").strip().split('\n')
        data_list.pop()  # drop the trailing line
        return data_list

    def create_team(self, equipe, email):
        """
        Create a Mattermost team and assign an admin if an email is given (the admin assignment does not work).
        """
        # DANGER: contrary to the documentation, the --email option does not make the user a team admin :(
        cmd = f"{mmctl} team create --name {equipe} --display-name {equipe} --private --email {email}"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        # Workaround: fetch the user and team ids and set the role {"scheme_admin": true, "scheme_user": true} through the regular Mattermost API.
        # TODO:
        return output.decode()

    def delete_team(self, equipe):
        """
        Delete a Mattermost team.
        """
        cmd = f"{mmctl} team delete {equipe} --confirm"
        output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
        return output.decode()
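Every method above shells out by interpolating values straight into the mmctl command line; a message containing quotes or `;` would be interpreted by the shell. A hedged sketch of safer assembly with `shlex.quote` (an illustrative helper, not something the module does):

```python
import shlex

def build_mmctl_command(base: str, *args: str) -> str:
    # Quote each argument so spaces and shell metacharacters survive shell=True
    return base + " " + " ".join(shlex.quote(a) for a in args)

cmd = build_mmctl_command("docker exec -i mattermostServ bin/mmctl",
                          "post", "create", "kaz:creation-comptes",
                          "--message", "hello; rm -rf /")
assert cmd.endswith("--message 'hello; rm -rf /'")
```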

bin/lib/paheko.py (new file)
@@ -0,0 +1,134 @@
import re
import string

import requests

from .config import getDockersConfig, getSecretConfig

paheko_ident = getDockersConfig("paheko_API_USER")
paheko_pass = getDockersConfig("paheko_API_PASSWORD")
paheko_auth = (paheko_ident, paheko_pass)
paheko_url = f"https://kaz-paheko.{getDockersConfig('domain')}"

class Paheko:
    def get_categories(self):
        """
        Fetch the Paheko categories together with their member counts.
        """
        api_url = paheko_url + '/api/user/categories'
        response = requests.get(api_url, auth=paheko_auth)
        if response.status_code == 200:
            data = response.json()
            return data
        else:
            return None

    def get_users_in_categorie(self, categorie):
        """
        List the members of a Paheko category.
        """
        if not categorie.isdigit():
            return 'Invalid category id', 400
        api_url = paheko_url + '/api/user/category/' + categorie + '.json'
        response = requests.get(api_url, auth=paheko_auth)
        if response.status_code == 200:
            data = response.json()
            return data
        else:
            return None

    def get_user(self, ident):
        """
        Fetch a Paheko member by kaz email, by Paheko number, or by the organization's short name.
        """
        emailmatchregexp = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")
        if emailmatchregexp.match(ident):
            data = {"sql": f"select * from users where email='{ident}' or alias = '{ident}'"}
            api_url = paheko_url + '/api/sql/'
            response = requests.post(api_url, auth=paheko_auth, data=data)
            # TODO: read count, check that it equals 1, and strip count=1 from the response
        elif ident.isdigit():
            api_url = paheko_url + '/api/user/' + ident
            response = requests.get(api_url, auth=paheko_auth)
        else:
            nomorga = re.sub(r'\W+', '', ident)  # strip non-alphanumeric characters
            data = {"sql": f"select * from users where admin_orga=1 and nom_orga='{nomorga}'"}
            api_url = paheko_url + '/api/sql/'
            response = requests.post(api_url, auth=paheko_auth, data=data)
            # TODO: read count, check that it equals 1, and strip count=1 from the response
        if response.status_code == 200:
            data = response.json()
            if data["count"] == 1:
                return data["results"][0]
            elif data["count"] == 0:
                return None
            else:
                return data["results"]
        else:
            return None

    def set_user(self, ident, field, new_value):
        """
        Change the value of one field of a Paheko member (ident = Paheko number or kaz email).
        """
        # Resolve the Paheko number when a kaz email is given
        emailmatchregexp = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")
        if emailmatchregexp.match(ident):
            data = {"sql": f"select id from users where email='{ident}'"}
            api_url = paheko_url + '/api/sql/'
            response = requests.post(api_url, auth=paheko_auth, data=data)
            if response.status_code == 200:
                # Extract the id from the response
                data = response.json()
                if data['count'] == 0:
                    print("email non trouvé")
                    return None
                elif data['count'] > 1:
                    print("trop de résultat")
                    return None
                else:
                    # OK
                    ident = data['results'][0]['id']
            else:
                print("pas de résultat")
                return None
        elif not ident.isdigit():
            print("Identifiant utilisateur invalide")
            return None
        regexp = re.compile("[^a-zA-Z0-9 \\r\\n\\t" + re.escape(string.punctuation) + "]")
        valeur = regexp.sub('', new_value)  # rough filter; this should really be much stricter depending on which fields are accepted
        champ = re.sub(r'\W+', '', field)  # no non-alphanumeric characters here; ideally the field would be picked from a whitelist
        api_url = paheko_url + '/api/user/' + str(ident)
        payload = {champ: valeur}
        response = requests.post(api_url, auth=paheko_auth, data=payload)
        return response.json()

    def get_users_with_action(self, action):
        """
        Return every Paheko member with a pending action (kaz account creation / modification...).
        """
        api_url = paheko_url + '/api/sql/'
        payload = {"sql": f"select * from users where action_auto='{action}'"}
        response = requests.post(api_url, auth=paheko_auth, data=payload)
        if response.status_code == 200:
            return response.json()
        else:
            return None
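`get_user` routes on the shape of `ident`: an email goes through an SQL lookup, a digit string through the REST user endpoint, and anything else is treated as an organization short name. A self-contained sketch of that dispatch, reusing the same regular expression:

```python
import re

EMAIL_RE = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")

def classify_ident(ident: str) -> str:
    # Mirror the branching order in Paheko.get_user
    if EMAIL_RE.match(ident):
        return "email"
    if ident.isdigit():
        return "paheko-number"
    return "orga-short-name"

assert classify_ident("prenom.nom@kaz.bzh") == "email"
assert classify_ident("1234") == "paheko-number"
assert classify_ident("mon-orga!") == "orga-short-name"
```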

bin/lib/sympa.py (new file)
@@ -0,0 +1,40 @@
import subprocess

from email_validator import validate_email, EmailNotValidError

from .config import getDockersConfig, getSecretConfig

sympa_user = getSecretConfig("sympaServ", "SOAP_USER")
sympa_pass = getSecretConfig("sympaServ", "SOAP_PASSWORD")
sympa_listmaster = getSecretConfig("sympaServ", "ADMINEMAIL")
sympa_url = f"https://{getDockersConfig('sympaHost')}.{getDockersConfig('domain')}"
sympa_soap = "docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl"
sympa_domain = getDockersConfig('domain_sympa')
sympa_liste_info = "infos"

# docker exec -i sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"

class Sympa:
    def _execute_sympa_command(self, email, liste, service):
        if validate_email(email) and validate_email(liste):
            cmd = f'{sympa_soap} --soap_url={sympa_url}/sympasoap --trusted_application={sympa_user} --trusted_application_password={sympa_pass} --proxy_vars=USER_EMAIL={sympa_listmaster} --service={service} --service_parameters="{liste},{email}" && echo $?'
            output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
            return output.decode()

    def add_email_to_list(self, email, liste=sympa_liste_info):
        """
        Subscribe an email address to a sympa list.
        """
        output = self._execute_sympa_command(email, f"{liste}@{sympa_domain}", 'add')
        return output

    def delete_email_from_list(self, email, liste=sympa_liste_info):
        """
        Remove an email address from a sympa list.
        """
        output = self._execute_sympa_command(email, f"{liste}@{sympa_domain}", 'del')
        return output

bin/lib/template.py (new file)
@@ -0,0 +1,8 @@
import jinja2

templateLoader = jinja2.FileSystemLoader(searchpath="../templates")
templateEnv = jinja2.Environment(loader=templateLoader)

def render_template(filename, args):
    template = templateEnv.get_template(filename)
    return template.render(args)

bin/lib/user.py (new file)
@@ -0,0 +1,213 @@
from email_validator import validate_email, EmailNotValidError
from glob import glob
import tempfile
import subprocess
import re
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import smtplib

from .paheko import Paheko
from .ldap import Ldap
from .mattermost import Mattermost
from .sympa import Sympa
from .template import render_template
from .config import getDockersConfig, getSecretConfig

DEFAULT_FILE = "/kaz/tmp/createUser.txt"
webmail_url = f"https://webmail.{getDockersConfig('domain')}"
mattermost_url = f"https://agora.{getDockersConfig('domain')}"
mdp_url = f"https://mdp.{getDockersConfig('domain')}"
sympa_url = f"https://listes.{getDockersConfig('domain')}"
site_url = f"https://{getDockersConfig('domain')}"
cloud_url = f"https://cloud.{getDockersConfig('domain')}"

def _generate_password():
    cmd = "apg -n 1 -m 10 -M NCL -d"
    output = subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
    new_password = "_" + output.decode("utf-8") + "_"
    return new_password
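`_generate_password` shells out to `apg` and fails if the binary is absent; a stdlib fallback using `secrets` (a hypothetical alternative, not what the module uses) produces a comparable 10-character alphanumeric password wrapped in underscores:

```python
import secrets
import string

def generate_password_fallback(length: int = 10) -> str:
    # NCL in apg terms: numerals, capitals, lowercase
    alphabet = string.ascii_letters + string.digits
    body = "".join(secrets.choice(alphabet) for _ in range(length))
    return f"_{body}_"

pw = generate_password_fallback()
assert len(pw) == 12 and pw.startswith("_") and pw.endswith("_")
```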
def create_user(email, email_secours, admin_orga, nom_orga, quota_disque, nom, prenom, nc_orga, garradin_orga, wp_orga, agora_orga, wiki_orga, nc_base, groupe_nc_base, equipe_agora, password=None):
    email = email.lower()
    with Ldap() as ldap:
        # already in the LDAP? (mail or alias)
        if ldap.get_email(email):
            print(f"ERREUR 1: {email} déjà existant dans ldap. on arrête tout")
            return None
        # check the organization name
        if admin_orga == 1:
            if nom_orga is None:
                print(f"ERREUR 0 sur paheko: {email} : nom_orga vide, on arrête tout")
                return
            if not bool(re.match(r'^[a-z0-9-]+$', nom_orga)):
                print(f"ERREUR 0 sur paheko: {email} : nom_orga ({nom_orga}) incohérent (minuscule/chiffre/-), on arrête tout")
                return
        # check the rescue email
        email_secours = email_secours.lower()
        if not validate_email(email_secours):
            print("Mauvais email de secours")
            return
        # check the quota
        quota = quota_disque
        if not quota.isdigit():
            print(f"ERREUR 2: quota non numérique : {quota}, on arrête tout")
            return
        # generate a password
        password = password or _generate_password()
        # create the LDAP entry
        # what are prenom/nom used for in the LDAP?
        data = {
            "prenom": prenom,
            "nom": nom,
            "password": password,
            "email_secours": email_secours,
            "quota": quota
        }
        if not ldap.create_user(email, **data):
            print("Erreur LDAP")
            return
    with Mattermost() as mm:
        # create the Mattermost account
        user = email.split('@')[0]
        mm.create_user(user, email, password)
        mm.add_user_to_team(email, "kaz")
        # ...and subscribe it to the two default channels
        mm.add_user_to_channel(email, "kaz", "une-question--un-soucis")
        mm.add_user_to_channel(email, "kaz", "cafe-du-commerce--ouvert-2424h")
        # create a new Mattermost team if needed
        if admin_orga == 1:
            mm.create_team(nom_orga, email)
            # BUG: creating the team did not make the email its admin, so add it as a plain member
            mm.add_user_to_team(email, nom_orga)
    # subscribe email and email_secours to the sympa_liste_info newsletter
    sympa = Sympa()
    sympa.add_email_to_list(email)
    sympa.add_email_to_list(email_secours)
    # build and send the confirmation email
    context = {
        'ADMIN_ORGA': admin_orga,
        'NOM': f"{prenom} {nom}",
        'EMAIL_SOUHAITE': email,
        'PASSWORD': password,
        'QUOTA': quota_disque,
        'URL_WEBMAIL': webmail_url,
        'URL_AGORA': mattermost_url,
        'URL_MDP': mdp_url,
        'URL_LISTE': sympa_url,
        'URL_SITE': site_url,
        'URL_CLOUD': cloud_url,
    }
    html = render_template("email_inscription.html", context)
    raw = render_template("email_inscription.txt", context)
    message = MIMEMultipart()
    message["Subject"] = "KAZ: confirmation d'inscription !"
    message["From"] = f"contact@{getDockersConfig('domain')}"
    message["To"] = f"{email}, {email_secours}"
    message.attach(MIMEText(raw, "plain"))
    message.attach(MIMEText(html, "html"))
    with smtplib.SMTP(f"mail.{getDockersConfig('domain')}", 25) as server:
        server.sendmail(f"contact@{getDockersConfig('domain')}", [email, email_secours], message.as_string())
    # reset the Paheko action flag to "Aucune"
    paheko = Paheko()
    try:
        paheko.set_user(email, "action_auto", "Aucune")
    except Exception:
        print(f"Erreur paheko pour remettre action_auto = Aucune pour {email}")
    # post a success message on Mattermost
    with Mattermost() as mm:
        msg = f"**POST AUTO** Inscription réussie pour {email} avec le secours {email_secours} Bisou!"
        mm.post_message(message=msg)

def create_waiting_users():
    """
    Create the pending kaznautes (Mattermost / Cloud / mail accounts + Mattermost post + email), driven by action="a créer" in Paheko.
    """
    # lock to prevent running the same API twice at once
    prefixe = "create_user_lock_"
    if glob(f"{tempfile.gettempdir()}/{prefixe}*"):
        print("Lock présent")
        return None
    lock_file = tempfile.NamedTemporaryFile(prefix=prefixe, delete=True)
    # which kaznautes need to be created?
    paheko = Paheko()
    liste_kaznautes = paheko.get_users_with_action("A créer")
    if liste_kaznautes:
        count = liste_kaznautes['count']
        if count == 0:
            print("aucun nouveau kaznaute à créer")
            return
        # at least one kaznaute to create
        for tab in liste_kaznautes['results']:
            create_user(**tab)
        print("fin des inscriptions")

def create_users_from_file(file=DEFAULT_FILE):
    """
    Create the pending kaznautes (Mattermost / Cloud / mail accounts + Mattermost post + email), driven by the input file.
    """
    # lock to prevent running the same API twice at once
    prefixe = "create_user_lock_"
    if glob(f"{tempfile.gettempdir()}/{prefixe}*"):
        print("Lock présent")
        return None
    lock_file = tempfile.NamedTemporaryFile(prefix=prefixe, delete=True)
    # which kaznautes need to be created?
    liste_kaznautes = []
    with open(file) as lines:
        for line in lines:
            line = line.strip()
            if not line.startswith("#") and line != "":
                user_data = line.split(';')
                user_dict = {
                    "nom": user_data[0],
                    "prenom": user_data[1],
                    "email": user_data[2],
                    "email_secours": user_data[3],
                    "nom_orga": user_data[4],
                    "admin_orga": user_data[5],
                    "nc_orga": user_data[6],
                    "garradin_orga": user_data[7],
                    "wp_orga": user_data[8],
                    "agora_orga": user_data[9],
                    "wiki_orga": user_data[10],
                    "nc_base": user_data[11],
                    "groupe_nc_base": user_data[12],
                    "equipe_agora": user_data[13],
                    "quota_disque": user_data[14],
                    "password": user_data[15] if len(user_data) > 15 else None,  # lists have no .get()
                }
                liste_kaznautes.append(user_dict)
    if liste_kaznautes:
        for tab in liste_kaznautes:
            create_user(**tab)
        print("fin des inscriptions")
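`create_users_from_file` expects semicolon-separated lines with fifteen mandatory fields and an optional trailing password; a minimal sketch of that parsing, with the field names taken from the function above:

```python
FIELDS = ["nom", "prenom", "email", "email_secours", "nom_orga", "admin_orga",
          "nc_orga", "garradin_orga", "wp_orga", "agora_orga", "wiki_orga",
          "nc_base", "groupe_nc_base", "equipe_agora", "quota_disque"]

def parse_user_line(line: str):
    # Skip comments and blank lines, as create_users_from_file does
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    parts = line.split(';')
    user = dict(zip(FIELDS, parts))
    user["password"] = parts[15] if len(parts) > 15 else None  # 16th field is optional
    return user

row = parse_user_line("Doe;Jane;jane@kaz.bzh;jane@example.org;;0;;;;;;;;;10")
assert row["email"] == "jane@kaz.bzh" and row["password"] is None
```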

@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
@@ -83,7 +82,8 @@ Init(){
[ $? -ne 0 ] && printKazError "$DockerServName ne parvient pas à démarrer correctement : impossible de terminer l'install" && return 1 >& $QUIET
# creation compte admin
${SIMU} curl -i -d "{\"email\":\"${mattermost_MM_ADMIN_EMAIL}\",\"username\":\"${mattermost_user}\",\"password\":\"${mattermost_pass}\",\"allow_marketing\":true}" "${MATTER_URL}/api/v4/users"
_getPasswords
${SIMU} curl -i -d "{\"email\":\"${MM_ADMIN_EMAIL}\",\"username\":\"${mattermost_user}\",\"password\":\"${mattermost_pass}\",\"allow_marketing\":true}" "${MATTER_URL}/api/v4/users"
MM_TOKEN=$(_getMMToken ${MATTER_URL})
@@ -98,12 +98,13 @@ Version(){
_getMMToken(){
#$1 MATTER_URL
_getPasswords
${SIMU} curl -i -s -d "{\"login_id\":\"${mattermost_user}\",\"password\":\"${mattermost_pass}\"}" "${1}/api/v4/users/login" | grep 'token' | sed 's/token:\s*\(.*\)\s*/\1/' | tr -d '\r'
}
PostMessage(){
printKazMsg "Envoi à $TEAM : $MESSAGE" >& $QUIET
_getPasswords
${SIMU} docker exec -ti "${DockerServName}" bin/mmctl auth login "${MATTER_URL}" --name local-server --username ${mattermost_user} --password ${mattermost_pass}
${SIMU} docker exec -ti "${DockerServName}" bin/mmctl post create "${TEAM}" --message "${MESSAGE}"
}
@@ -113,6 +114,16 @@ MmctlCommand(){
${SIMU} docker exec -u 33 "$DockerServName" bin/mmctl $1
}
_getPasswords(){
# récupération des infos du compte admin
if [ -n "$AGORACOMMUN" ] ; then
. $KAZ_KEY_DIR/env-mattermostAdmin
. $KAZ_KEY_DIR/env-mattermostServ
else
. $KAZ_KEY_DIR/orgas/${ORGA}/env-mattermostAdmin
. $KAZ_KEY_DIR/orgas/$ORGA/env-mattermostServ
fi
}
########## Main #################
for ARG in "$@"; do

@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
@@ -63,11 +62,12 @@ Init(){
cookies=$(curl -c - ${POD_URL})
CSRF_TOKEN=$(curl --cookie <(echo "$cookies") ${POD_URL}/cp-install | grep "csrf_test_name" | sed "s/.*value=.//" | sed "s/.>//")
_getPasswords
#echo ${CSRF_TOKEN}
${SIMU} curl --cookie <(echo "$cookies") -X POST \
-d "username=${castopod_ADMIN_USER}" \
-d "password=${castopod_ADMIN_PASSWORD}" \
-d "email=${castopod_ADMIN_MAIL}" \
-d "username=${ADMIN_USER}" \
-d "password=${ADMIN_PASSWORD}" \
-d "email=${ADMIN_MAIL}" \
-d "csrf_test_name=${CSRF_TOKEN}" \
"${POD_URL}/cp-install/create-superadmin"
@@ -78,7 +78,13 @@ Version(){
echo "Version $DockerServName : ${GREEN}${VERSION}${NC}"
}
_getPasswords(){
if [ -n "$CASTOPOD_COMMUN" ]; then
. $KAZ_KEY_DIR/env-castopodAdmin
else
. $KAZ_KEY_DIR/orgas/$ORGA/env-castopodAdmin
fi
}
########## Main #################
for ARG in "$@"; do

@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
@@ -16,7 +15,7 @@ availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
# CLOUD
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio ransomware_protection" #rainloop richdocumentscode
QUIET="1"
ONNAS=
@@ -75,7 +74,7 @@ Init(){
CONF_FILE="${NAS_VOL}/orga_${ORGA}-cloudConfig/_data/config.php"
fi
firstInstall "$CLOUD_URL" "$CONF_FILE" " NextCloud de $NOM"
firstInstall "$CLOUD_URL" "$CONF_FILE" "$NOM"
updatePhpConf "$CONF_FILE"
InstallApplis
echo "${CYAN} *** Paramétrage richdocuments pour $ORGA${NC}" >& $QUIET
@@ -100,43 +99,58 @@ firstInstall(){
# $2 phpConfFile
# $3 orga
if ! grep -q "'installed' => true," "$2" 2> /dev/null; then
printKazMsg "\n *** Premier lancement de $3" >& $QUIET
printKazMsg "\n *** Premier lancement nextcloud $3" >& $QUIET
_getPasswords
${SIMU} waitUrl "$1"
${SIMU} curl -X POST \
-d "install=true" \
-d "adminlogin=${nextcloud_NEXTCLOUD_ADMIN_USER}" \
-d "adminpass=${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}" \
-d "adminlogin=${NEXTCLOUD_ADMIN_USER}" \
-d "adminpass=${NEXTCLOUD_ADMIN_PASSWORD}" \
-d "directory=/var/www/html/data" \
-d "dbtype=mysql" \
-d "dbuser=${nextcloud_MYSQL_USER}" \
-d "dbpass=${nextcloud_MYSQL_PASSWORD}" \
-d "dbname=${nextcloud_MYSQL_DATABASE}" \
-d "dbhost=${nextcloud_MYSQL_HOST}" \
-d "dbuser=${MYSQL_USER}" \
-d "dbpass=${MYSQL_PASSWORD}" \
-d "dbname=${MYSQL_DATABASE}" \
-d "dbhost=${MYSQL_HOST}" \
-d "install-recommended-apps=true" \
"$1"
fi
}
setOfficeUrl(){
OFFICE_URL="https://${officeHost}.${domain}"
if [ ! "${site}" = "prod1" ]; then
OFFICE_URL="https://${site}-${officeHost}.${domain}"
_getPasswords(){
if [ -n "$CLOUDCOMMUN" ]; then
. $KAZ_KEY_DIR/env-nextcloudServ
. $KAZ_KEY_DIR/env-nextcloudDB
else
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudServ
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudDB
fi
}
setOfficeUrl(){
# Did le 25 mars les offices sont tous normalisé sur les serveurs https://${site}-${officeHost}.${domain}
#OFFICE_URL="https://${officeHost}.${domain}"
#if [ ! "${site}" = "prod1" ]; then
OFFICE_URL="https://${site}-${officeHost}.${domain}"
#fi
occCommand "config:app:set --value $OFFICE_URL richdocuments public_wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments disable_certificate_verification"
}
initLdap(){
. $KAZ_KEY_DIR/env-ldapServ
# $1 Nom du cloud
echo "${CYAN} *** Installation LDAP pour $1${NC}" >& $QUIET
occCommand "app:enable user_ldap" "${DockerServName}"
occCommand "ldap:delete-config s01" "${DockerServName}"
occCommand "ldap:create-empty-config" "${DockerServName}"
occCommand "ldap:set-config s01 ldapAgentName cn=cloud,ou=applications,${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapAgentPassword ${ldap_LDAP_CLOUD_PASSWORD}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapAgentPassword ${LDAP_CLOUD_PASSWORD}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBase ${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBaseGroups ${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBaseUsers ou=users,${ldap_root}" "${DockerServName}"

@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
@@ -55,15 +54,11 @@ Init(){
PLG_DIR="${VOL_PREFIX}wikiPlugins/_data"
CONF_DIR="${VOL_PREFIX}wikiConf/_data"
# Gael, j'avais ajouté ça mais j'ai pas test alors je laisse comme avant ...
# A charge au prochain qui monte un wiki de faire qque chose
#WIKI_ROOT="${dokuwiki_WIKI_ROOT}"
#WIKI_EMAIL="${dokuwiki_WIKI_EMAIL}"
#WIKI_PASS="${dokuwiki_WIKI_PASSWORD}"
WIKI_ROOT=Kaz
WIKI_EMAIL=wiki@kaz.local
WIKI_PASS=azerty
if [ -n "$WIKICOMMUN" ]; then
. $KAZ_KEY_DIR/env-dokuwiki
else
. $KAZ_KEY_DIR/orgas/$ORGA/env-dokuwiki
fi
${SIMU} checkDockerRunning "${DockerServName}" "${NOM}" || exit
@@ -80,8 +75,8 @@ Init(){
-d "d[superuser]=${WIKI_ROOT}" \
-d "d[fullname]=Admin"\
-d "d[email]=${WIKI_EMAIL}" \
-d "d[password]=${WIKI_PASS}" \
-d "d[confirm]=${WIKI_PASS}" \
-d "d[password]=${WIKI_PASSWORD}" \
-d "d[confirm]=${WIKI_PASSWORD}" \
-d "d[policy]=1" \
-d "d[allowreg]=false" \
-d "d[license]=0" \

@@ -7,7 +7,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
@@ -62,10 +61,17 @@ Init(){
${SIMU} waitUrl "${WP_URL}"
if [ -n "$WIKICOMMUN" ]; then
. $KAZ_KEY_DIR/env-wpServ
else
. $KAZ_KEY_DIR/orgas/$ORGA/env-wpServ
fi
${SIMU} curl -X POST \
-d "user_name=${wp_WORDPRESS_ADMIN_USER}" \
-d "admin_password=${wp_WORDPRESS_ADMIN_PASSWORD}" \
-d "admin_password2=${wp_WORDPRESS_ADMIN_PASSWORD}" \
-d "user_name=${WORDPRESS_ADMIN_USER}" \
-d "admin_password=${WORDPRESS_ADMIN_PASSWORD}" \
-d "admin_password2=${WORDPRESS_ADMIN_PASSWORD}" \
-d "pw_weak=true" \
-d "admin_email=admin@kaz.bzh" \
-d "blog_public=0" \

@@ -0,0 +1,68 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
newenvfile=$KAZ_KEY_DIR/env-mattermostAdmin
touch $newenvfile
echo "mattermost_user=$mattermost_user" >> $newenvfile
echo "mattermost_pass=$mattermost_pass" >> $newenvfile
echo "mattermost_token=$mattermost_token" >> $newenvfile
echo "EMAIL_CONTACT=$EMAIL_CONTACT" >> $DOCKERS_ENV
newenvfile=$KAZ_KEY_DIR/env-paheko
touch $newenvfile
echo "API_USER=$paheko_API_USER" >> $newenvfile
echo "API_PASSWORD=$paheko_API_PASSWORD" >> $newenvfile
newenvfile=$KAZ_KEY_DIR/env-mail
touch $newenvfile
echo "service_mail=$service_mail" >> $newenvfile
echo "service_password=$service_password" >> $newenvfile
newenvfile=$KAZ_KEY_DIR/env-borg
# touch $newenvfile  (presumably it already exists)
echo "BORG_REPO=$BORG_REPO" >> $newenvfile
echo "BORG_PASSPHRASE=$BORG_PASSPHRASE" >> $newenvfile
echo "VOLUME_SAUVEGARDES=$VOLUME_SAUVEGARDES" >> $newenvfile
echo "MAIL_RAPPORT=$MAIL_RAPPORT" >> $newenvfile
echo "BORGMOUNT=$BORGMOUNT" >> $newenvfile
newenvfile=$KAZ_KEY_DIR/env-traefik
touch $newenvfile
echo "DASHBOARD_USER=$traefik_DASHBOARD_USER" >> $newenvfile
echo "DASHBOARD_PASSWORD=$traefik_DASHBOARD_PASSWORD" >> $newenvfile
#####################
# Castopod
# TO BE COPIED INTO A CONF FILE!! castopodAdmin
newenvfile=$KAZ_KEY_DIR/env-castopodAdmin
touch $newenvfile
echo "ADMIN_USER=$castopod_ADMIN_USER" >> $newenvfile
echo "ADMIN_MAIL=$castopod_ADMIN_MAIL" >> $newenvfile
echo "ADMIN_PASSWORD=$castopod_ADMIN_PASSWORD" >> $newenvfile
# create the folder holding each orga's env files
mkdir $KAZ_KEY_DIR/orgas
orgasLong=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
ORGAS=${orgasLong[*]//-orga/}
for orga in ${ORGAS};do
mkdir $KAZ_KEY_DIR/orgas/$orga
cp $KAZ_KEY_DIR/env-{castopod{Admin,DB,Serv},mattermost{DB,Serv},nextcloud{DB,Serv},spip{DB,Serv},wp{DB,Serv}} $KAZ_KEY_DIR/orgas/$orga
done
echo "C'est parfait, vous pouvez git pull puis supprimer SetAllPass.sh"

@@ -9,7 +9,6 @@ KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
. $KAZ_ROOT/secret/env-kaz
@@ -133,6 +132,7 @@ for orgaLong in ${Orgas}; do
${SIMU} rsync -aAhHX --info=progress2 --delete "${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}" -e "ssh -p 2201" root@${SITE_DST}.${domain}:"${DOCK_VOL_PAHEKO_ORGA}/"
fi
${SIMU} rsync -aAhHX --info=progress2 --delete ${KAZ_COMP_DIR}/${orgaLong} -e "ssh -p 2201" root@${SITE_DST}.${domain}:${KAZ_COMP_DIR}/
${SIMU} rsync -aAhHX --info=progress2 --delete ${KAZ_KEY_DIR}/orgas/${orgaCourt} -e "ssh -p 2201" root@${SITE_DST}.${domain}:${KAZ_KEY_DIR}/orgas/${orgaCourt}
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "grep -q '^${orgaLong}\$' /kaz/config/container-orga.list || echo ${orgaLong} >> /kaz/config/container-orga.list"
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} ${KAZ_COMP_DIR}/${orgaLong}/init-volume.sh
@@ -143,6 +143,4 @@ for orgaLong in ${Orgas}; do
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${KAZ_BIN_DIR}/manageCloud.sh" --officeURL "${orgaCourt}"
fi
done

@@ -20,8 +20,7 @@ ${SIMU} "${CV1}" stop orga
${SIMU} "${CV1}" stop
${SIMU} rsync "${EV1}/dockers.env" "${EV2}/"
${SIMU} rsync "${SV1}/SetAllPass.sh" "${SV2}/"
${SIMU} "${BV2}/updateDockerPassword.sh"
${SIMU} rsync "${SV1}/" "${SV2}/"
# XXX ? rsync /kaz/secret/allow_admin_ip /kaz-git/secret/allow_admin_ip

@@ -0,0 +1,41 @@
#!/bin/bash
#date: 23/04/2025
#who: fab
#what: remove from acme.json the Let's Encrypt certificates that are no longer needed
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
FILE_ACME_ORI="/var/lib/docker/volumes/traefik_letsencrypt/_data/acme.json"
FILE_ACME="/tmp/acme.json"
FILE_URL=$(mktemp)
FILE_ACME_TMP=$(mktemp)
#the server's IP:
#does not work for the machines hosted at T.C... :( so we read the IP from config/dockers.env
#MAIN_IP=$(curl ifconfig.me)
#DANGER: the IP from config/dockers.env does not work for domains outside *.kaz.bzh (e.g. radiokalon.fr)
#backup copies
cp $FILE_ACME_ORI $FILE_ACME
cp $FILE_ACME "$FILE_ACME"_$(date +%Y%m%d_%H%M%S)
#collect every certificate URL
jq -r '.letsencrypt.Certificates[].domain.main' $FILE_ACME > $FILE_URL
while read -r url; do
#echo "Processing: $url"
nb=$(dig $url | grep $MAIN_IP | wc -l)
if [ "$nb" -eq 0 ]; then
#no longer pointing at this server, remove it from acme.json
echo "on supprime "$url
jq --arg url "$url" 'del(.letsencrypt.Certificates[] | select(.domain.main == $url))' $FILE_ACME > $FILE_ACME_TMP
mv -f $FILE_ACME_TMP $FILE_ACME
fi
done < "$FILE_URL"
echo "si satisfait, remettre "$FILE_ACME" dans "$FILE_ACME_ORI

@@ -4,12 +4,12 @@ KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_AGORA=https://$matterHost.$domain/api/v4
EQUIPE=kaz
PostMattermost() {
. $KAZ_KEY_DIR/env-mattermostAdmin
PostM=$1
CHANNEL=$2
TEAMID=$(curl -s -H "Authorization: Bearer ${mattermost_token}" "${URL_AGORA}/teams/name/${EQUIPE}" | jq .id | sed -e 's/"//g')

@@ -6,7 +6,6 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_AGORA=$(echo $matterHost).$(echo $domain)
MAX_QUEUE=50
@@ -15,6 +14,8 @@ OLDIFS=$IFS
IFS=" "
COUNT_MAILQ=$(docker exec -t mailServ mailq | tail -n1 | gawk '{print $5}')
# récupération mots de passes
. $KAZ_KEY_DIR/env-mattermostAdmin
docker exec ${mattermostServName} bin/mmctl --suppress-warnings auth login $httpProto://$URL_AGORA --name local-server --username $mattermost_user --password $mattermost_pass >/dev/null 2>&1
if [ "${COUNT_MAILQ}" -gt "${MAX_QUEUE}" ]; then

@@ -1,7 +1,6 @@
#!/bin/bash
# --------------------------------------------------------------------------------------
# Didier
#
# Script de sauvegarde avec BorgBackup
# la commande de creation du dépot est : borg init --encryption=repokey /mnt/backup-nas1/BorgRepo
# la conf de borg est dans /root/.config/borg
@@ -18,9 +17,13 @@ KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
VERSION="V-10-03-2025"
. ${KAZ_KEY_DIR}/env-borg
# Si la variable SCRIPTBORG est renseignée avec un fichier on le source
if [ ! -z ${SCRIPTBORG} ]
then
[ -f ${SCRIPTBORG} ] && . ${SCRIPTBORG}
fi
VERSION="V-07-08-2025"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
#IFS=' '

@@ -3,70 +3,138 @@
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. $DOCKERS_ENV
cd "${KAZ_ROOT}"
NEW_DIR="secret"
TMPL_DIR="secret.tmpl"
SORTIESTANDARD=1
DIR=$KAZ_KEY_DIR
ORGA=
if [ ! -d "${NEW_DIR}/" ]; then
rsync -a "${TMPL_DIR}/" "${NEW_DIR}/"
fi
NEW_FILE="${NEW_DIR}/SetAllPass-new.sh"
TMPL_FILE="${NEW_DIR}/SetAllPass.sh"
usage() {
echo "${PRG} [OPTIONS] [filename ...]
# WALKS THE ENV FILES AND FILLS IN the --clean_val-- markers that have not been completed yet.
The following markers are searched for:
@@pass@@***@@p@@ -> a 16-character password is generated (the *** identifies the password, in case it is reused elsewhere)
@@db@@***@@d@@ -> a database name is generated (likewise identified by ***)
@@user@@***@@u@@ -> a user is generated
@@token@@***@@t@@ -> a token is generated
@@globalvar@@***@@gv@@ -> the global variable *** is looked up
@@crossvar@@envname_varname@@cv@@ -> the variable is looked up in the env files
while read line ; do
if [[ "${line}" =~ ^# ]] || [ -z "${line}" ] ; then
echo "${line}"
continue
fi
if [[ "${line}" =~ "--clean_val--" ]] ; then
case "${line}" in
*jirafeau_DATA_DIR*)
JIRAFEAU_DIR=$(getValInFile "${DOCKERS_ENV}" "jirafeauDir")
[ -z "${JIRAFEAU_DIR}" ] &&
echo "${line}" ||
sed "s%\(.*\)--clean_val--\(.*\)%\1${JIRAFEAU_DIR}\2%" <<< ${line}
continue
;;
*DATABASE*)
dbName="$(sed "s/\([^_]*\)_.*/\1/" <<< ${line})_$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${dbName}\2/" <<< ${line}
continue
;;
*ROOT_PASSWORD*|*PASSWORD*)
pass="$(apg -n 1 -m 16 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
*USER*)
user="$(sed "s/\([^_]*\)_.*/\1/" <<< ${line})_$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${user}\2/" <<< ${line}
continue
;;
*RAIN_LOOP*|*office_password*|*mattermost_*|*sympa_*|*gitea_*)
pass="$(apg -n 1 -m 16 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
*vaultwarden_ADMIN_TOKEN*)
pass="$(apg -n 1 -m 32 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
esac
If filenames are given, replacements are only made in those files (and their clean-val markers are "linked" together!)
OPTIONS
-h|--help This help :-)
-n|--simu DRY RUN
-q|--quiet No background noise
-d foldername take the env files from the subfolder /kaz/secret/orgas/foldername/ (for the orgas!)
-
"
}
for ARG in "$@"; do
if [ -n "${DIRECTORYARG}" ]; then # après un -d
DIR=$KAZ_KEY_DIR/orgas/${ARG}
ORGA=${ARG}
DIRECTORYARG=
else
echo "${line}"
continue
case "${ARG}" in
'-d' | '--directory' | '-f' | '--folder' | '--foldername')
DIRECTORYARG="A DIRECTORY IS EXPECTED NEXT" ;;
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' | '--quiet')
SORTIESTANDARD="/dev/null" ;;
*)
ENVFILES="${ENVFILES} ${ARG%}";;
esac
fi
printKazError "${line}" >&2
done < "${TMPL_FILE}" > "${NEW_FILE}"
done
mv "${NEW_FILE}" "${TMPL_FILE}"
NB_FILES=$(echo "${ENVFILES}" | wc -w )
chmod a+x "${TMPL_FILE}"
. "${TMPL_FILE}"
"${KAZ_BIN_DIR}/updateDockerPassword.sh"
if [[ $NB_FILES = 0 ]]; then
ENVFILES=$(grep -lE '@@pass@@|@@db@@|@@user@@|@@token@@|@@globalvar@@|@@crossvar@@' $DIR/* | sed 's/.*\///') # keep only the basenames
fi
secretGen(){
# $1 Le env-file à compléter
FILENAME=$DIR/$1
NBMATCH=$(grep -lE '@@pass@@|@@db@@|@@user@@|@@token@@|@@globalvar@@' $FILENAME | wc -l) # is there anything to generate in this file?
if [[ $NBMATCH = 0 ]]; then
true
# nothing to do in this file, skip it
else
echo "Filling in $FILENAME" >& $SORTIESTANDARD
db="$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
pass="$(apg -n 1 -m 16 -M NCL)"
token="$(apg -n 1 -m 32 -M NCL)"
user="$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
dbs=$(grep -Eo '@@db@@[^@]*@@d@@' $FILENAME | sed -e 's/@@db@@//' -e 's/@@d@@//')
passwords=$(grep -Eo '@@pass@@[^@]*@@p@@' $FILENAME | sed -e 's/@@pass@@//' -e 's/@@p@@//')
tokens=$(grep -Eo '@@token@@[^@]*@@t@@' $FILENAME | sed -e 's/@@token@@//' -e 's/@@t@@//')
users=$(grep -Eo '@@user@@[^@]*@@u@@' $FILENAME | sed -e 's/@@user@@//' -e 's/@@u@@//')
globalvars=$(grep -Eo '@@globalvar@@[^@]*@@gv@@' $FILENAME | sed -e 's/@@globalvar@@//' -e 's/@@gv@@//')
for dbName in $dbs; do $SIMU sed -i "s/@@db@@$dbName@@d@@/${dbName}_$db/" $DIR/*; done
for pw in $passwords; do $SIMU sed -i "s/@@pass@@$pw@@p@@/${pass}/" $DIR/*; done
for tk in $tokens; do $SIMU sed -i "s/@@token@@$tk@@t@@/${token}/" $DIR/*; done
for u in $users; do $SIMU sed -i "s/@@user@@$u@@u@@/${u}_$user/" $DIR/*; done
for var in $globalvars; do $SIMU sed -i "s/@@globalvar@@$var@@gv@@/${!var}/" $DIR/*; done
fi
}
crossVarComplete(){
# $1 Le env-file à compléter
FILENAME=$DIR/$1
NBMATCH=$(grep -lE '@@crossvar@@' $FILENAME | wc -l) # are there any cross-vars to resolve?
if [[ $NBMATCH = 0 ]]; then
true
# nothing to do in this file, skip it
else
echo "Filling in $FILENAME" >& $SORTIESTANDARD
varnames=$(grep -Eo '@@crossvar@@[^@]*@@cv@@' $FILENAME | sed -e 's/@@crossvar@@//' -e 's/@@cv@@//')
for varname in $varnames; do
envname=${varname%%_*}
# source the env file the variable comes from, now that envname is known
. $DIR/env-$envname
$SIMU sed -i "s/@@crossvar@@$varname@@cv@@/${!varname}/" $DIR/*;
done
fi
}
for ENVFILE in $ENVFILES; do
secretGen "$ENVFILE"
done
for ENVFILE in $ENVFILES; do
crossVarComplete "$ENVFILE"
done
exit 0
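The placeholder scheme above can be illustrated with a minimal, self-contained sketch; the fixed values stand in for the `apg` calls, and `env-app` is a hypothetical env file, not one from the repo:

```shell
# Minimal sketch of the expansion performed by secretGen, under the
# assumption that the apg generators are replaced by fixed values.
dir=$(mktemp -d)
cat > "$dir/env-app" <<'EOF'
APP_PASSWORD=@@pass@@app@@p@@
APP_DB=@@db@@appdb@@d@@
EOF
pass="s3cret"          # stands in for: apg -n 1 -m 16 -M NCL
suffix="xy"            # stands in for: apg -n 1 -m 2 -M NCL | cut -c 1-2
# extract each token, then rewrite every file in the directory
for tok in $(grep -Eo '@@pass@@[^@]*@@p@@' "$dir/env-app" | sed -e 's/@@pass@@//' -e 's/@@p@@//'); do
  sed -i "s/@@pass@@$tok@@p@@/$pass/" "$dir"/*
done
for tok in $(grep -Eo '@@db@@[^@]*@@d@@' "$dir/env-app" | sed -e 's/@@db@@//' -e 's/@@d@@//'); do
  sed -i "s/@@db@@$tok@@d@@/${tok}_$suffix/" "$dir"/*
done
cat "$dir/env-app"
```

Because each `sed` runs over every file in the directory, a token that appears in several env files receives the same generated value, which is how the clean-val markers stay "linked" together.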

bin/templates/email.css Normal file
View File

@@ -0,0 +1,82 @@
body {
font-family: Arial, sans-serif;
background-color: #f4f4f4;
margin: 0;
padding: 0;
}
.email-content {
background-color: #f0f0f0; /* Light gray background */
margin: 20px auto;
padding: 20px;
border: 1px solid #dddddd;
max-width: 600px;
width: 90%; /* This makes the content take 90% width of its container */
text-align: left; /* Remove text justification */
}
header {
background-color: #E16969;
color: white;
text-align: center;
height: 50px; /* Fixed height for header */
line-height: 50px; /* Vertically center the text */
width: 100%; /* Make header full width */
}
footer {
background-color: #E16969;
color: white;
text-align: center;
height: 50px; /* Fixed height for footer */
line-height: 50px; /* Vertically center the text */
width: 100%; /* Make footer full width */
}
.header-container {
position: relative; /* To position the logo and the text inside the header */
height: 50px; /* Maximum header height */
}
.logo {
position: absolute; /* To position the logo */
max-height: 100%; /* Logo height capped at the header height */
top: 0; /* Align the logo to the top */
left: 0; /* Align the logo to the left */
margin-right: 10px; /* Right margin after the logo */
}
.header-container h1, .footer-container p {
margin: 0;
font-size: 24px;
}
.footer-container p {
font-size: 12px;
}
.footer-container a {
color: #FFFFFF; /* White color for links in footer */
text-decoration: none;
}
.footer-container a:hover {
text-decoration: underline; /* Optional: add underline on hover */
}
a {
color: #E16969; /* Same color as header/footer background for all other links */
text-decoration: none;
}
a:hover {
text-decoration: underline; /* Optional: add underline on hover */
}
h2 {
color: #E16969;
}
p {
line-height: 1.6;
}

View File

@@ -0,0 +1,9 @@
<footer>
<div class="footer-container">
<p>
Ici, on prend soin de vos données et on ne les vend pas !
<br>
<a href="https://kaz.bzh">https://kaz.bzh</a>
</p>
</div>
</footer>

View File

@@ -0,0 +1,6 @@
<header>
<div class="header-container">
<img class="logo" src="https://kaz-cloud.kaz.bzh/apps/theming/image/logo?v=33" alt="KAZ Logo">
<h1>Kaz : Le numérique sobre, libre, éthique et local</h1>
</div>
</header>

View File

@@ -0,0 +1,94 @@
<!DOCTYPE html>
<html lang="fr">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Email d'inscription</title>
<style>
{% include 'email.css' %}
</style>
</head>
<body>
{% include 'email_header.html' %}
<div class="email-content">
<p>
Bonjour {{NOM}}!<br><br>
Bienvenue chez KAZ!<br><br>
Vous disposez de :
<ul>
<li>une messagerie classique : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>une messagerie instantanée pour discuter au sein d'équipes : <a href={{URL_AGORA}}>{{URL_AGORA}}</a></li>
</ul>
Votre email et identifiant pour ces services : {{EMAIL_SOUHAITE}}<br>
Le mot de passe : <b>{{PASSWORD}}</b><br><br>
Pour changer votre mot de passe de messagerie, c'est ici: <a href={{URL_MDP}}>{{URL_MDP}}</a><br>
Si vous avez perdu votre mot de passe, c'est ici: <a href={{URL_MDP}}/?action=sendtoken>{{URL_MDP}}/?action=sendtoken</a><br><br>
Vous pouvez accéder à votre messagerie classique:
<ul>
<li>soit depuis votre webmail : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>soit depuis votre bureau virtuel : <a href={{URL_CLOUD}}>{{URL_CLOUD}}</a></li>
<li>soit depuis un client de messagerie comme thunderbird<br>
</ul>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
En tant qu'association/famille/société, vous avez la possibilité d'ouvrir, quand vous le voulez, des services kaz : il vous suffit de nous le demander.<br><br>
Pourquoi n'ouvrons-nous pas tous les services tout de suite ? Parce que nous aimons la sobriété et que nous préservons notre espace disque ;)<br>
À quoi sert d'avoir un site web si on ne l'utilise pas, n'est-ce pas ?<br><br>
Par retour de mail, dites-nous de quoi vous avez besoin tout de suite entre:
<ul>
<li>une comptabilité : un service de gestion adhérents/clients</li>
<li>un site web de type WordPress</li>
<li>un cloud : bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances</li>
</ul>
Une fois que vous aurez répondu à ce mail, votre demande sera traitée manuellement.
</p>
{% endif %}
<p>
Vous avez quelques docs intéressantes sur le wiki de kaz:
<ul>
<li>Migrer son site internet wordpress vers kaz : <a href="https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz">https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz</a></li>
<li>Migrer sa messagerie vers kaz : <a href="https://wiki.kaz.bzh/messagerie/gmail/start">https://wiki.kaz.bzh/messagerie/gmail/start</a></li>
<li>Démarrer simplement avec son cloud : <a href="https://wiki.kaz.bzh/nextcloud/start">https://wiki.kaz.bzh/nextcloud/start</a></li>
</ul>
Votre quota est de {{QUOTA}}GB. Si vous souhaitez plus de place pour vos fichiers ou la messagerie, faites-nous signe !<br><br>
Pour accéder à la messagerie instantanée et communiquer avec les membres de votre équipe ou ceux de kaz : <a href={{URL_AGORA}}/login>{{URL_AGORA}}/login</a><br>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
Comme administrateur de votre organisation, vous pouvez créer des listes de diffusion en vous rendant sur <a href={{URL_LISTE}}>{{URL_LISTE}}</a><br>
</p>
{% endif %}
<p>
Enfin, vous disposez de tous les autres services KAZ où l'authentification n'est pas nécessaire : <a href={{URL_SITE}}>{{URL_SITE}}</a><br><br>
En cas de soucis, n'hésitez pas à poser vos questions sur le canal 'Une question ? un soucis' de l'agora dispo ici : <a href={{URL_AGORA}}>{{URL_AGORA}}</a><br><br>
Si vous avez besoin d'accompagnement pour votre site, votre cloud, votre compta, votre migration de messagerie,...<br>nous proposons des formations mensuelles gratuites. Si vous souhaitez être accompagné par un professionnel, nous pouvons vous donner une liste de pros, référencés par KAZ.<br><br>
À bientôt 😉<br><br>
La collégiale de KAZ.<br>
</p>
</div> <!-- <div class="email-content"> -->
{% include 'email_footer.html' %}
</body>
</html>

View File

@@ -0,0 +1,70 @@
Bonjour {{NOM}}!
Bienvenue chez KAZ!<br><br>
Vous disposez de :
<ul>
<li>une messagerie classique : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>une messagerie instantanée pour discuter au sein d'équipes : <a href={{URL_AGORA}}>{{URL_AGORA}}</a></li>
</ul>
Votre email et identifiant pour ces services : {{EMAIL_SOUHAITE}}<br>
Le mot de passe : <b>{{PASSWORD}}</b><br><br>
Pour changer votre mot de passe de messagerie, c'est ici: <a href={{URL_MDP}}>{{URL_MDP}}</a><br>
Si vous avez perdu votre mot de passe, c'est ici: <a href={{URL_MDP}}/?action=sendtoken>{{URL_MDP}}/?action=sendtoken</a><br><br>
Vous pouvez accéder à votre messagerie classique:
<ul>
<li>soit depuis votre webmail : <a href={{URL_WEBMAIL}}>{{URL_WEBMAIL}}</a></li>
<li>soit depuis votre bureau virtuel : <a href={{URL_CLOUD}}>{{URL_CLOUD}}</a></li>
<li>soit depuis un client de messagerie comme thunderbird<br>
</ul>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
En tant qu'association/famille/société, vous avez la possibilité d'ouvrir, quand vous le voulez, des services kaz : il vous suffit de nous le demander.<br><br>
Pourquoi n'ouvrons-nous pas tous les services tout de suite ? Parce que nous aimons la sobriété et que nous préservons notre espace disque ;)<br>
À quoi sert d'avoir un site web si on ne l'utilise pas, n'est-ce pas ?<br><br>
Par retour de mail, dites-nous de quoi vous avez besoin tout de suite entre:
<ul>
<li>une comptabilité : un service de gestion adhérents/clients</li>
<li>un site web de type WordPress</li>
<li>un cloud : bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances</li>
</ul>
Une fois que vous aurez répondu à ce mail, votre demande sera traitée manuellement.
</p>
{% endif %}
<p>
Vous avez quelques docs intéressantes sur le wiki de kaz:
<ul>
<li>Migrer son site internet wordpress vers kaz : <a href="https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz">https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz</a></li>
<li>Migrer sa messagerie vers kaz : <a href="https://wiki.kaz.bzh/messagerie/gmail/start">https://wiki.kaz.bzh/messagerie/gmail/start</a></li>
<li>Démarrer simplement avec son cloud : <a href="https://wiki.kaz.bzh/nextcloud/start">https://wiki.kaz.bzh/nextcloud/start</a></li>
</ul>
Votre quota est de {{QUOTA}}GB. Si vous souhaitez plus de place pour vos fichiers ou la messagerie, faites-nous signe !<br><br>
Pour accéder à la messagerie instantanée et communiquer avec les membres de votre équipe ou ceux de kaz : <a href={{URL_AGORA}}/login>{{URL_AGORA}}/login</a><br>
</p>
{% if ADMIN_ORGA == '1' %}
<p>
Comme administrateur de votre organisation, vous pouvez créer des listes de diffusion en vous rendant sur <a href={{URL_LISTE}}>{{URL_LISTE}}</a><br>
</p>
{% endif %}
<p>
Enfin, vous disposez de tous les autres services KAZ où l'authentification n'est pas nécessaire : <a href={{URL_SITE}}>{{URL_SITE}}</a><br><br>
En cas de soucis, n'hésitez pas à poser vos questions sur le canal 'Une question ? un soucis' de l'agora dispo ici : <a href={{URL_AGORA}}>{{URL_AGORA}}</a><br><br>
Si vous avez besoin d'accompagnement pour votre site, votre cloud, votre compta, votre migration de messagerie,...<br>nous proposons des formations mensuelles gratuites. Si vous souhaitez être accompagné par un professionnel, nous pouvons vous donner une liste de pros, référencés par KAZ.<br><br>
À bientôt 😉<br><br>
La collégiale de KAZ.<br>

View File

@@ -1,121 +0,0 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
# for debugging
# SIMU=echo
# Possible improvements
# - take the affected services as parameters (to limit the scope of the changes)
# - for the DBs, if a new login is declared its privileges are created but the old ones are not revoked
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
updateEnvDB(){
# $1 = prefix
# $2 = envName
# $3 = containerName of DB
rootPass="$1_MYSQL_ROOT_PASSWORD"
dbName="$1_MYSQL_DATABASE"
userName="$1_MYSQL_USER"
userPass="$1_MYSQL_PASSWORD"
${SIMU} sed -i \
-e "s/MYSQL_ROOT_PASSWORD=.*/MYSQL_ROOT_PASSWORD=${!rootPass}/g" \
-e "s/MYSQL_DATABASE=.*/MYSQL_DATABASE=${!dbName}/g" \
-e "s/MYSQL_USER=.*/MYSQL_USER=${!userName}/g" \
-e "s/MYSQL_PASSWORD=.*/MYSQL_PASSWORD=${!userPass}/g" \
"$2"
# only if root has no password yet
# chicken-and-egg problem (the old values would be needed):
# * if rootPass changes, do it by hand
# * if dbName changes, do it by hand
checkDockerRunning "$3" "$3" || return
echo "change DB pass on docker $3"
echo "grant all privileges on ${!dbName}.* to '${!userName}' identified by '${!userPass}';" | \
docker exec -i $3 bash -c "mysql --user=root --password=${!rootPass}"
}
updateEnv(){
# $1 = prefix
# $2 = envName
for varName in $(grep "^[a-zA-Z_]*=" $2 | sed "s/^\([^=]*\)=.*/\1/g")
do
srcName="$1_${varName}"
srcVal=$(echo "${!srcName}" | sed -e "s/[&]/\\\&/g")
${SIMU} sed -i \
-e "s%^[ ]*${varName}=.*\$%${varName}=${srcVal}%" \
"$2"
done
}
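updateEnv relies on bash indirect expansion (`${!name}`): the value is looked up under the variable named `<prefix>_<varName>`. A minimal sketch, with illustrative stand-ins for the values normally sourced from SetAllPass.sh:

```shell
# Illustrative values; in the real script these come from SetAllPass.sh
wp_MYSQL_USER="wp_ab"
prefix="wp"
varName="MYSQL_USER"
srcName="${prefix}_${varName}"
# indirect expansion: expand the variable whose NAME is stored in srcName
echo "${!srcName}"
```

This is what lets one generic loop serve every service: the prefix selects which family of secrets feeds the env file being rewritten.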
framadateUpdate(){
[[ "${COMP_ENABLE}" =~ " framadate " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php" ]; then
return 0
fi
checkDockerRunning "${framadateServName}" "Framadate" &&
${SIMU} docker exec -ti "${framadateServName}" bash -c -i "htpasswd -bc /var/framadate/admin/.htpasswd ${framadate_HTTPD_USER} ${framadate_HTTPD_PASSWORD}"
${SIMU} sed -i \
-e "s/^#*const DB_USER[ ]*=.*$/const DB_USER= '${framadate_MYSQL_USER}';/g" \
-e "s/^#*const DB_PASSWORD[ ]*=.*$/const DB_PASSWORD= '${framadate_MYSQL_PASSWORD}';/g" \
"${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php"
}
jirafeauUpdate(){
[[ "${COMP_ENABLE}" =~ " jirafeau " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php" ]; then
return 0
fi
SHA=$(echo -n "${jirafeau_HTTPD_PASSWORD}" | sha256sum | cut -d ' ' -f 1)
${SIMU} sed -i \
-e "s/'admin_password'[ ]*=>[ ]*'[^']*'/'admin_password' => '${SHA}'/g" \
"${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php"
}
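The hash written by jirafeauUpdate is a plain SHA-256 of the admin password; a sketch with an illustrative value in place of `jirafeau_HTTPD_PASSWORD`:

```shell
pass="example-password"   # illustrative; normally jirafeau_HTTPD_PASSWORD
# echo -n matters: a trailing newline would change the digest
SHA=$(echo -n "$pass" | sha256sum | cut -d ' ' -f 1)
echo "$SHA"
```

`cut -d ' ' -f 1` drops the trailing " -" filename marker that sha256sum appends, keeping only the 64 hex characters expected in config.local.php.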
####################
# main
updateEnvDB "etherpad" "${KAZ_KEY_DIR}/env-${etherpadDBName}" "${etherpadDBName}"
updateEnvDB "framadate" "${KAZ_KEY_DIR}/env-${framadateDBName}" "${framadateDBName}"
updateEnvDB "gitea" "${KAZ_KEY_DIR}/env-${gitDBName}" "${gitDBName}"
updateEnvDB "mattermost" "${KAZ_KEY_DIR}/env-${mattermostDBName}" "${mattermostDBName}"
updateEnvDB "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudDBName}" "${nextcloudDBName}"
updateEnvDB "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeDBName}" "${roundcubeDBName}"
updateEnvDB "sympa" "${KAZ_KEY_DIR}/env-${sympaDBName}" "${sympaDBName}"
updateEnvDB "vigilo" "${KAZ_KEY_DIR}/env-${vigiloDBName}" "${vigiloDBName}"
updateEnvDB "wp" "${KAZ_KEY_DIR}/env-${wordpressDBName}" "${wordpressDBName}"
updateEnvDB "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenDBName}" "${vaultwardenDBName}"
updateEnvDB "castopod" "${KAZ_KEY_DIR}/env-${castopodDBName}" "${castopodDBName}"
updateEnv "apikaz" "${KAZ_KEY_DIR}/env-${apikazServName}"
updateEnv "ethercalc" "${KAZ_KEY_DIR}/env-${ethercalcServName}"
updateEnv "etherpad" "${KAZ_KEY_DIR}/env-${etherpadServName}"
updateEnv "framadate" "${KAZ_KEY_DIR}/env-${framadateServName}"
updateEnv "gandi" "${KAZ_KEY_DIR}/env-gandi"
updateEnv "gitea" "${KAZ_KEY_DIR}/env-${gitServName}"
updateEnv "jirafeau" "${KAZ_KEY_DIR}/env-${jirafeauServName}"
updateEnv "mattermost" "${KAZ_KEY_DIR}/env-${mattermostServName}"
updateEnv "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudServName}"
updateEnv "office" "${KAZ_KEY_DIR}/env-${officeServName}"
updateEnv "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeServName}"
updateEnv "vigilo" "${KAZ_KEY_DIR}/env-${vigiloServName}"
updateEnv "wp" "${KAZ_KEY_DIR}/env-${wordpressServName}"
updateEnv "ldap" "${KAZ_KEY_DIR}/env-${ldapServName}"
updateEnv "sympa" "${KAZ_KEY_DIR}/env-${sympaServName}"
updateEnv "mail" "${KAZ_KEY_DIR}/env-${smtpServName}"
updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonServName}"
updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonDBName}"
updateEnv "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenServName}"
updateEnv "castopod" "${KAZ_KEY_DIR}/env-${castopodServName}"
updateEnv "ldap" "${KAZ_KEY_DIR}/env-${ldapUIName}"
framadateUpdate
jirafeauUpdate
exit 0

View File

@@ -12,7 +12,6 @@ setKazVars
cd $(dirname $0)/..
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
DOCK_DIR=$KAZ_COMP_DIR

View File

@@ -1,2 +1,2 @@
proxy
#traefik
# proxy
traefik

View File

@@ -4,7 +4,7 @@ dokuwiki
paheko
gitea
jirafeau
mattermost
#mattermost
roundcube
mobilizon
vaultwarden

View File

@@ -4,3 +4,4 @@ collabora
etherpad
web
imapsync
spip

View File

@@ -93,13 +93,15 @@ vaultwardenHost=koffre
traefikHost=dashboard
imapsyncHost=imapsync
castopodHost=pod
spipHost=spip
mastodonHost=masto
apikazHost=apikaz
snappymailHost=snappymail
########################################
# ports internes
matterPort=8000
matterPort=8065
imapsyncPort=8080
apikaz=5000
@@ -147,9 +149,18 @@ ldapUIName=ldapUI
imapsyncServName=imapsyncServ
castopodDBName=castopodDB
castopodServName=castopodServ
mastodonServName=mastodonServ
spipDBName=spipDB
spipServName=spipServ
mastodonDBName=mastodonDB
apikazServName=apikazServ
########################################
# services enabled by container.sh
# environment variables used
# for the reverse-proxy template
##################
# who do we send the signup email to?
EMAIL_CONTACT="toto@kaz.bzh"

View File

@@ -1,58 +0,0 @@
FROM alpine:3.17
# Some ENV variables
ENV PATH="/mattermost/bin:${PATH}"
#ENV MM_VERSION=5.32.0
ENV MM_VERSION=6.1.0
ENV MM_INSTALL_TYPE=docker
# Build argument to set Mattermost edition
ARG edition=enterprise
ARG PUID=2000
ARG PGID=2000
ARG MM_BINARY=
# Install some needed packages
RUN apk add --no-cache \
ca-certificates \
curl \
jq \
libc6-compat \
libffi-dev \
libcap \
linux-headers \
mailcap \
netcat-openbsd \
xmlsec-dev \
tzdata \
&& rm -rf /tmp/*
# Get Mattermost
RUN mkdir -p /mattermost/data /mattermost/plugins /mattermost/client/plugins \
&& if [ ! -z "$MM_BINARY" ]; then curl $MM_BINARY | tar -xvz ; \
elif [ "$edition" = "team" ] ; then curl https://releases.mattermost.com/$MM_VERSION/mattermost-team-$MM_VERSION-linux-amd64.tar.gz?src=docker-app | tar -xvz ; \
else curl https://releases.mattermost.com/$MM_VERSION/mattermost-$MM_VERSION-linux-amd64.tar.gz?src=docker-app | tar -xvz ; fi \
&& cp /mattermost/config/config.json /config.json.save \
&& rm -rf /mattermost/config/config.json \
&& addgroup -g ${PGID} mattermost \
&& adduser -D -u ${PUID} -G mattermost -h /mattermost -D mattermost \
&& chown -R mattermost:mattermost /mattermost /config.json.save /mattermost/plugins /mattermost/client/plugins \
&& setcap cap_net_bind_service=+ep /mattermost/bin/mattermost
USER mattermost
# Healthcheck to make sure container is ready
HEALTHCHECK CMD curl --fail http://localhost:8000 || exit 1
# Configure entrypoint and command
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
WORKDIR /mattermost
CMD ["mattermost"]
# Expose port 8000 of the container
EXPOSE 8000
# Declare volumes for mount point directories
VOLUME ["/mattermost/data", "/mattermost/logs", "/mattermost/config", "/mattermost/plugins", "/mattermost/client/plugins"]

View File

@@ -1,82 +0,0 @@
#!/bin/sh
# Function to generate a random salt
generate_salt() {
tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 48 | head -n 1
}
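generate_salt draws 48 alphanumeric characters from /dev/urandom; a quick, self-contained check of the shape it produces (the function body is reproduced here so the snippet runs on its own):

```shell
# Same definition as above, reproduced so the snippet is self-contained
generate_salt() {
  # keep only [a-zA-Z0-9], wrap at 48 chars, take the first full line
  tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 48 | head -n 1
}
salt=$(generate_salt)
echo "${#salt}"
```

Each call yields a fresh value, which is why the config-creation path below can safely use it for PublicLinkSalt, InviteSalt, and AtRestEncryptKey.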
# Read environment variables or set default values
DB_HOST=${DB_HOST:-db}
DB_PORT_NUMBER=${DB_PORT_NUMBER:-5432}
# see https://www.postgresql.org/docs/current/libpq-ssl.html
# for usage when database connection requires encryption
# filenames should be escaped if they contain spaces
# i.e. $(printf %s ${MY_ENV_VAR:-''} | jq -s -R -r @uri)
# the location of the CA file can be set using environment var PGSSLROOTCERT
# the location of the CRL file can be set using PGSSLCRL
# The URL syntax for connection string does not support the parameters
# sslrootcert and sslcrl reliably, so use these PostgreSQL-specified variables
# to set names if using a location other than default
DB_USE_SSL=${DB_USE_SSL:-disable}
MM_DBNAME=${MM_DBNAME:-mattermost}
MM_CONFIG=${MM_CONFIG:-/mattermost/config/config.json}
_1=$(echo "$1" | awk '{ s=substr($0, 0, 1); print s; }' )
if [ "$_1" = '-' ]; then
set -- mattermost "$@"
fi
if [ "$1" = 'mattermost' ]; then
# Check CLI args for a -config option
for ARG in "$@"; do
case "$ARG" in
-config=*) MM_CONFIG=${ARG#*=};;
esac
done
if [ ! -f "$MM_CONFIG" ]; then
# If there is no configuration file, create it with some default values
echo "No configuration file $MM_CONFIG"
echo "Creating a new one"
# Copy default configuration file
cp /config.json.save "$MM_CONFIG"
# Substitute some parameters with jq
jq '.ServiceSettings.ListenAddress = ":8000"' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.LogSettings.EnableConsole = true' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.LogSettings.ConsoleLevel = "ERROR"' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.FileSettings.Directory = "/mattermost/data/"' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.FileSettings.EnablePublicLink = true' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq ".FileSettings.PublicLinkSalt = \"$(generate_salt)\"" "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.EmailSettings.SendEmailNotifications = false' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.EmailSettings.FeedbackEmail = ""' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.EmailSettings.SMTPServer = ""' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.EmailSettings.SMTPPort = ""' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq ".EmailSettings.InviteSalt = \"$(generate_salt)\"" "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq ".EmailSettings.PasswordResetSalt = \"$(generate_salt)\"" "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.RateLimitSettings.Enable = true' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.SqlSettings.DriverName = "postgres"' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq ".SqlSettings.AtRestEncryptKey = \"$(generate_salt)\"" "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
jq '.PluginSettings.Directory = "/mattermost/plugins/"' "$MM_CONFIG" > "$MM_CONFIG.tmp" && mv "$MM_CONFIG.tmp" "$MM_CONFIG"
else
echo "Using existing config file $MM_CONFIG"
fi
# Configure database access
if [ -z "$MM_SQLSETTINGS_DATASOURCE" ] && [ -n "$MM_USERNAME" ] && [ -n "$MM_PASSWORD" ]; then
echo "Configure database connection..."
# URLEncode the password, allowing for special characters
ENCODED_PASSWORD=$(printf %s "$MM_PASSWORD" | jq -s -R -r @uri)
export MM_SQLSETTINGS_DATASOURCE="postgres://$MM_USERNAME:$ENCODED_PASSWORD@$DB_HOST:$DB_PORT_NUMBER/$MM_DBNAME?sslmode=$DB_USE_SSL&connect_timeout=10"
echo "OK"
else
echo "Using existing database connection"
fi
# Wait another second for the database to be properly started.
# Necessary to avoid "panic: Failed to open sql connection pq: the database system is starting up"
sleep 1
echo "Starting mattermost"
fi
exec "$@"
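The first-character dispatch at the top of the entrypoint can be sketched in isolation: an argument starting with `-` is treated as flags for the default `mattermost` command (the `dispatch` wrapper is illustrative, added only so the behavior is observable):

```shell
# Sketch of the leading-dash dispatch used by the entrypoint above
dispatch() {
  _1=$(echo "$1" | awk '{ s=substr($0, 0, 1); print s; }')
  if [ "$_1" = '-' ]; then
    # prepend the default command, keeping the original args as its flags
    set -- mattermost "$@"
  fi
  echo "$@"
}
dispatch -config=/tmp/config.json
```

This is the usual Docker entrypoint idiom: `docker run image -config=...` behaves like `docker run image mattermost -config=...`.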

View File

@@ -4,19 +4,21 @@ services:
#{{db
db:
image: mariadb:11.4
container_name: ${orga}DB
container_name: ${orga}-DB
#disk_quota: 10G
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
restart: ${restartPolicy}
volumes:
- ./initdb.d:/docker-entrypoint-initdb.d:ro
# - ./initdb.d:/docker-entrypoint-initdb.d:ro
- orgaDB:/var/lib/mysql
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
environment:
- MARIADB_AUTO_UPGRADE=1
env_file:
- ../../secret/env-${nextcloudDBName}
# - ../../secret/env-${mattermostDBName}
- ../../secret/env-${wordpressDBName}
- ../../secret/orgas/${orga}/env-${nextcloudDBName}
# - ../../secret/orgas/${orga}/env-${mattermostDBName}
- ../../secret/orgas/${orga}/env-${wordpressDBName}
networks:
- orgaNet
healthcheck: # used by init-db.sh when creating an orga
@@ -32,7 +34,7 @@ services:
#{{cloud
cloud:
image: nextcloud
container_name: ${orga}${nextcloudServName}
container_name: ${orga}-${nextcloudServName}
#disk_quota: 10G
restart: ${restartPolicy}
networks:
@@ -48,8 +50,8 @@ services:
- ${smtpServName}:${smtpHost}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${nextcloudServName}.rule=Host(`${orga}${cloudHost}.${domain}`){{FOREIGN_NC}}"
- "traefik.http.routers.${orga}${nextcloudServName}.middlewares=nextcloud-redirectregex1@file,nextcloud-redirectregex2@file"
- "traefik.http.routers.${orga}-${nextcloudServName}.rule=Host(`${orga}-${cloudHost}.${domain}`){{FOREIGN_NC}}"
- "traefik.http.routers.${orga}-${nextcloudServName}.middlewares=nextcloud-redirectregex1@file,nextcloud-redirectregex2@file"
volumes:
- cloudMain:/var/www/html
- cloudData:/var/www/html/data
@@ -61,10 +63,10 @@ services:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
env_file:
- ../../secret/env-${nextcloudServName}
- ../../secret/env-${nextcloudDBName}
- ../../secret/orgas/${orga}/env-${nextcloudServName}
- ../../secret/orgas/${orga}/env-${nextcloudDBName}
environment:
- NEXTCLOUD_TRUSTED_DOMAINS=${orga}${cloudHost}.${domain}
- NEXTCLOUD_TRUSTED_DOMAINS=${orga}-${cloudHost}.${domain}
- SMTP_HOST=${smtpHost}
- SMTP_PORT=25
- MAIL_DOMAIN=${domain}
@@ -78,7 +80,7 @@ services:
- edition=team
- PUID=1000
- PGID=1000
container_name: ${orga}${mattermostServName}
container_name: ${orga}-${mattermostServName}
#disk_quota: 10G
restart: ${restartPolicy}
# memory: 1G
@@ -107,20 +109,20 @@ services:
- /etc/timezone:/etc/timezone:ro
- /etc/environment:/etc/environment:ro
env_file:
- ../../secret/env-${mattermostServName}
- ../../secret/orgas/${orga}/env-${mattermostServName}
environment:
- VIRTUAL_HOST=${orga}${matterHost}.${domain}
- VIRTUAL_HOST=${orga}-${matterHost}.${domain}
# in case your config is not in default location
#- MM_CONFIG=/mattermost/config/config.json
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${mattermostServName}.rule=Host(`${orga}${matterHost}.${domain}`)"
- "traefik.http.routers.${orga}-${mattermostServName}.rule=Host(`${orga}-${matterHost}.${domain}`)"
#}}
#{{wp
wordpress:
image: wordpress
container_name: ${orga}${wordpressServName}
container_name: ${orga}-${wordpressServName}
restart: ${restartPolicy}
networks:
- orgaNet
@@ -134,17 +136,17 @@ services:
external_links:
- ${smtpServName}:${smtpHost}.${domain}
env_file:
- ../../secret/env-${wordpressServName}
- ../../secret/orgas/${orga}/env-${wordpressServName}
environment:
- WORDPRESS_SMTP_HOST=${smtpHost}.${domain}
- WORDPRESS_SMTP_PORT=25
# - WORDPRESS_SMTP_USERNAME
# - WORDPRESS_SMTP_PASSWORD
# - WORDPRESS_SMTP_FROM=${orga}
- WORDPRESS_SMTP_FROM_NAME=${orga}
# - WORDPRESS_SMTP_FROM=${orga}-
- WORDPRESS_SMTP_FROM_NAME=${orga}-
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${wordpressServName}.rule=Host(`${orga}${wordpressHost}.${domain}`){{FOREIGN_WP}}"
- "traefik.http.routers.${orga}-${wordpressServName}.rule=Host(`${orga}-${wordpressHost}.${domain}`){{FOREIGN_WP}}"
volumes:
- wordpress:/var/www/html
# - ../../config/orgaTmpl/wp:/usr/local/bin/wp:ro
@@ -152,12 +154,12 @@ services:
#{{wiki
dokuwiki:
image: mprasil/dokuwiki
container_name: ${orga}${dokuwikiServName}
container_name: ${orga}-${dokuwikiServName}
#disk_quota: 10G
restart: ${restartPolicy}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${dokuwikiServName}.rule=Host(`${orga}${dokuwikiHost}.${domain}`){{FOREIGN_DW}}"
- "traefik.http.routers.${orga}-${dokuwikiServName}.rule=Host(`${orga}-${dokuwikiHost}.${domain}`){{FOREIGN_DW}}"
volumes:
- wikiData:/dokuwiki/data
- wikiConf:/dokuwiki/conf
@@ -173,7 +175,7 @@ services:
#{{castopod
castopod:
image: castopod/castopod:latest
container_name: ${orga}${castopodServName}
container_name: ${orga}-${castopodServName}
#disk_quota: 10G
restart: ${restartPolicy}
# memory: 1G
@@ -191,29 +193,55 @@ services:
volumes:
- castopodMedia:/var/www/castopod/public/media
environment:
CP_BASEURL: "https://${orga}${castopodHost}.${domain}"
CP_BASEURL: "https://${orga}-${castopodHost}.${domain}"
CP_ANALYTICS_SALT: qldsgfliuzrbhgmkjbdbmkvb
VIRTUAL_PORT: 8000
CP_CACHE_HANDLER: redis
CP_REDIS_HOST: redis
CP_DATABASE_HOSTNAME: db
env_file:
- ../../secret/env-${castopodServName}
- ../../secret/env-${castopodDBName}
- ../../secret/orgas/${orga}/env-${castopodServName}
- ../../secret/orgas/${orga}/env-${castopodDBName}
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}${castopodServName}.rule=Host(`${orga}${castopodHost}.${domain}`){{FOREIGN_POD}}"
- "traefik.http.routers.${orga}-${castopodServName}.rule=Host(`${orga}-${castopodHost}.${domain}`){{FOREIGN_POD}}"
redis:
image: redis:7.0-alpine
container_name: ${orga}castopodCache
container_name: ${orga}-castopodCache
volumes:
- castopodCache:/data
networks:
- orgaNet
env_file:
- ../../secret/env-${castopodServName}
- ../../secret/orgas/${orga}/env-${castopodServName}
command: --requirepass ${castopodRedisPassword}
#}}
#{{spip
spip:
container_name: ${orga}-${spipServName}
image: ipeos/spip:4.4
restart: ${restartPolicy}
depends_on:
- db
links:
- db
env_file:
- ../../secret/orgas/${orga}/env-${spipServName}
environment:
- SPIP_AUTO_INSTALL=1
- SPIP_DB_HOST=db
- SPIP_SITE_ADDRESS=https://${orga}-${spipHost}.${domain}
expose:
- 80
labels:
- "traefik.enable=true"
- "traefik.http.routers.${orga}-${spipServName}.rule=Host(`${orga}-${spipHost}.${domain}`){{FOREIGN_SPIP}}"
networks:
- orgaNet
volumes:
- spip:/usr/src/spip
#}}
@@ -223,87 +251,92 @@ volumes:
#{{db
orgaDB:
external: true
name: orga_${orga}orgaDB
name: orga_${orga}-orgaDB
#}}
#{{agora
matterConfig:
external: true
name: orga_${orga}matterConfig
name: orga_${orga}-matterConfig
matterData:
external: true
name: orga_${orga}matterData
name: orga_${orga}-matterData
matterLogs:
external: true
name: orga_${orga}matterLogs
name: orga_${orga}-matterLogs
matterPlugins:
external: true
name: orga_${orga}matterPlugins
name: orga_${orga}-matterPlugins
matterClientPlugins:
external: true
name: orga_${orga}matterClientPlugins
name: orga_${orga}-matterClientPlugins
matterIcons:
external: true
name: matterIcons
#{{cloud
cloudMain:
external: true
name: orga_${orga}cloudMain
name: orga_${orga}-cloudMain
cloudData:
external: true
name: orga_${orga}cloudData
name: orga_${orga}-cloudData
cloudConfig:
external: true
name: orga_${orga}cloudConfig
name: orga_${orga}-cloudConfig
cloudApps:
external: true
name: orga_${orga}cloudApps
name: orga_${orga}-cloudApps
cloudCustomApps:
external: true
name: orga_${orga}cloudCustomApps
name: orga_${orga}-cloudCustomApps
cloudThemes:
external: true
name: orga_${orga}cloudThemes
name: orga_${orga}-cloudThemes
cloudPhp:
external: true
name: orga_${orga}cloudPhp
name: orga_${orga}-cloudPhp
#}}
#{{wiki
wikiData:
external: true
name: orga_${orga}wikiData
name: orga_${orga}-wikiData
wikiConf:
external: true
name: orga_${orga}wikiConf
name: orga_${orga}-wikiConf
wikiPlugins:
external: true
name: orga_${orga}wikiPlugins
name: orga_${orga}-wikiPlugins
wikiLibtpl:
external: true
name: orga_${orga}wikiLibtpl
name: orga_${orga}-wikiLibtpl
wikiLogs:
external: true
name: orga_${orga}wikiLogs
name: orga_${orga}-wikiLogs
#}}
#{{wp
wordpress:
external: true
name: orga_${orga}wordpress
name: orga_${orga}-wordpress
#}}
#{{castopod
castopodMedia:
external: true
name: orga_${orga}castopodMedia
name: orga_${orga}-castopodMedia
castopodCache:
external: true
name: orga_${orga}castopodCache
name: orga_${orga}-castopodCache
#}}
#{{spip
spip:
external: true
name: orga_${orga}-spip
#}}
networks:
orgaNet:
external: true
name: ${orga}orgaNet
name: ${orga}-orgaNet
# postfixNet:
# external:
# name: postfixNet

View File

@@ -4,7 +4,6 @@ KAZ_ROOT=$(cd $(dirname $0)/../..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
cd $(dirname $0)
ORGA_DIR="$(basename "$(pwd)")"
@@ -25,51 +24,72 @@ SQL=""
for ARG in "$@"; do
case "${ARG}" in
'cloud' )
. $KAZ_KEY_DIR/orgas/$ORGA/env-nextcloudDB
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${nextcloud_MYSQL_DATABASE};
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${nextcloud_MYSQL_USER}';
CREATE USER '${nextcloud_MYSQL_USER}'@'%';
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${nextcloud_MYSQL_DATABASE}.* TO '${nextcloud_MYSQL_USER}'@'%' IDENTIFIED BY '${nextcloud_MYSQL_PASSWORD}';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
'agora' )
. $KAZ_KEY_DIR/orgas/$ORGA/env-mattermostDB
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${mattermost_MYSQL_DATABASE};
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${mattermost_MYSQL_USER}';
CREATE USER '${mattermost_MYSQL_USER}'@'%';
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${mattermost_MYSQL_DATABASE}.* TO '${mattermost_MYSQL_USER}'@'%' IDENTIFIED BY '${mattermost_MYSQL_PASSWORD}';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
'wp' )
. $KAZ_KEY_DIR/orgas/$ORGA/env-wpDB
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${wp_MYSQL_DATABASE};
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${wp_MYSQL_USER}';
CREATE USER '${wp_MYSQL_USER}'@'%';
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${wp_MYSQL_DATABASE}.* TO '${wp_MYSQL_USER}'@'%' IDENTIFIED BY '${wp_MYSQL_PASSWORD}';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
'castopod' )
. $KAZ_KEY_DIR/orgas/$ORGA/env-castopodDB
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${castopod_MYSQL_DATABASE};
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${castopod_MYSQL_USER}';
CREATE USER '${castopod_MYSQL_USER}'@'%';
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${castopod_MYSQL_DATABASE}.* TO '${castopod_MYSQL_USER}'@'%' IDENTIFIED BY '${castopod_MYSQL_PASSWORD}';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
'spip' )
. $KAZ_KEY_DIR/orgas/$ORGA/env-spipDB
SQL="$SQL
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;"
;;
esac
done
echo $SQL | docker exec -i ${ORGA}-DB bash -c "mariadb --user=root --password=${wp_MYSQL_ROOT_PASSWORD}"
echo $SQL | docker exec -i ${ORGA}-DB bash -c "mariadb --user=root --password=${MYSQL_ROOT_PASSWORD}"
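The hunk above drops the per-service variable prefixes (`wp_MYSQL_USER`, `castopod_MYSQL_PASSWORD`, ...) in favor of generic `MYSQL_*` names sourced from `secret/orgas/$ORGA/env-<service>DB`, so a single SQL template now serves every service. A minimal standalone sketch of that pattern (throwaway env file, not the actual init-db.sh):

```shell
#!/bin/bash
# Sketch of the refactored pattern: each per-orga env file defines the
# generic MYSQL_DATABASE / MYSQL_USER / MYSQL_PASSWORD variables, and a
# single template renders the provisioning SQL for any service.
set -eu

sql_for_env() {
  # Source the env file in a subshell so its variables do not leak out.
  (
    . "$1"
    cat <<EOF
CREATE DATABASE IF NOT EXISTS ${MYSQL_DATABASE};
DROP USER IF EXISTS '${MYSQL_USER}';
CREATE USER '${MYSQL_USER}'@'%';
GRANT ALL ON ${MYSQL_DATABASE}.* TO '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASSWORD}';
FLUSH PRIVILEGES;
EOF
  )
}

# Demo with a throwaway env file (the real ones live in secret/orgas/<orga>/):
tmp=$(mktemp)
printf 'MYSQL_DATABASE=wpdb\nMYSQL_USER=wpuser\nMYSQL_PASSWORD=s3cret\n' > "$tmp"
sql_for_env "$tmp"
rm -f "$tmp"
```

The real script concatenates such fragments into `$SQL` and pipes the result to `mariadb` inside the `${ORGA}-DB` container.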

View File

@@ -3,37 +3,41 @@
#docker network create postfix_mailNet
#{{db
docker volume create --name=orga_${orga}orgaDB
docker volume create --name=orga_${orga}-orgaDB
#}}
#{{agora
docker volume create --name=orga_${orga}matterConfig
docker volume create --name=orga_${orga}matterData
docker volume create --name=orga_${orga}matterLogs
docker volume create --name=orga_${orga}matterPlugins
docker volume create --name=orga_${orga}matterClientPlugins
docker volume create --name=orga_${orga}-matterConfig
docker volume create --name=orga_${orga}-matterData
docker volume create --name=orga_${orga}-matterLogs
docker volume create --name=orga_${orga}-matterPlugins
docker volume create --name=orga_${orga}-matterClientPlugins
docker volume create --name=matterIcons
#}}
#{{cloud
docker volume create --name=orga_${orga}cloudMain
docker volume create --name=orga_${orga}cloudData
docker volume create --name=orga_${orga}cloudConfig
docker volume create --name=orga_${orga}cloudApps
docker volume create --name=orga_${orga}cloudCustomApps
docker volume create --name=orga_${orga}cloudThemes
docker volume create --name=orga_${orga}cloudPhp
chown 33:33 /var/lib/docker/volumes/orga_${orga}cloud*/_data
docker volume create --name=orga_${orga}-cloudMain
docker volume create --name=orga_${orga}-cloudData
docker volume create --name=orga_${orga}-cloudConfig
docker volume create --name=orga_${orga}-cloudApps
docker volume create --name=orga_${orga}-cloudCustomApps
docker volume create --name=orga_${orga}-cloudThemes
docker volume create --name=orga_${orga}-cloudPhp
chown 33:33 /var/lib/docker/volumes/orga_${orga}-cloud*/_data
#}}
#{{wiki
docker volume create --name=orga_${orga}wikiData
docker volume create --name=orga_${orga}wikiConf
docker volume create --name=orga_${orga}wikiPlugins
docker volume create --name=orga_${orga}wikiLibtpl
docker volume create --name=orga_${orga}wikiLogs
docker volume create --name=orga_${orga}-wikiData
docker volume create --name=orga_${orga}-wikiConf
docker volume create --name=orga_${orga}-wikiPlugins
docker volume create --name=orga_${orga}-wikiLibtpl
docker volume create --name=orga_${orga}-wikiLogs
#}}
#{{wp
docker volume create --name=orga_${orga}wordpress
docker volume create --name=orga_${orga}-wordpress
#}}
#{{castopod
docker volume create --name=orga_${orga}castopodCache
docker volume create --name=orga_${orga}castopodMedia
docker volume create --name=orga_${orga}-castopodCache
docker volume create --name=orga_${orga}-castopodMedia
#}}
#{{spip
docker volume create --name=orga_${orga}-spip
#}}

View File

@@ -1,3 +0,0 @@
CREATE DATABASE IF NOT EXISTS nextcloud;
CREATE DATABASE IF NOT EXISTS mattermost;
CREATE DATABASE IF NOT EXISTS wpdb;

View File

@@ -20,7 +20,7 @@ STAGE_CREATE=
STAGE_INIT=
usage(){
echo "Usage: $0 [-h] [-l] [+/-paheko] [-/+cloud [-/+collabora}]] [+/-agora] [+/-wiki] [+/-wp] [+/-pod] [x{G/M/k}] OrgaName"
echo "Usage: $0 [-h] [-l] [+/-paheko] [-/+cloud [-/+collabora]] [+/-agora] [+/-wiki] [+/-wp] [+/-pod] [+/-spip] [x{G/M/k}] OrgaName"
echo " -h|--help : this help"
echo " -l|--list : list service"
@@ -34,6 +34,7 @@ usage(){
echo " +/- wiki : on/off wiki"
echo " +/- wp|word* : on/off wp"
echo " +/- casto*|pod : on/off castopod"
echo " +/- spip : on/off spip"
echo " x[GMk] : set quota"
echo " OrgaName : name must contain a-z0-9_\-"
}
@@ -141,6 +142,7 @@ export agora=$(flagInCompose docker-compose.yml agora: off)
export wiki=$(flagInCompose docker-compose.yml dokuwiki: off)
export wp=$(flagInCompose docker-compose.yml wordpress: off)
export castopod=$(flagInCompose docker-compose.yml castopod: off)
export spip=$(flagInCompose docker-compose.yml spip: off)
export db="off"
export services="off"
export paheko=$([[ -f usePaheko ]] && echo "on" || echo "off")
@@ -159,7 +161,7 @@ INITCMD2="--install"
for ARG in "$@"; do
case "${ARG}" in
'-show' )
for i in cloud collabora agora wiki wp castopod db; do
for i in cloud collabora agora wiki wp castopod spip db; do
echo "${i}=${!i}"
done
exit;;
@@ -195,6 +197,9 @@ for ARG in "$@"; do
'-pod' | '-casto'* )
castopod="off"
;;
'-spip' )
spip="off"
;;
'+paheko' )
paheko="on"
;;
@@ -225,6 +230,11 @@ for ARG in "$@"; do
DBaInitialiser="$DBaInitialiser castopod"
INITCMD2="$INITCMD2 -pod"
;;
'+spip' )
spip="on"
DBaInitialiser="$DBaInitialiser spip"
;;
[.0-9]*[GMk] )
quota="${ARG}"
;;
@@ -304,6 +314,13 @@ if [[ "${castopod}" = "on" ]]; then
else
DEL_DOMAIN+="${ORGA}-${castopodHost} "
fi
if [[ "${spip}" = "on" ]]; then
DOMAIN_AREA+=" - ${ORGA}-\${spipServName}:${ORGA}-\${spipHost}.\${domain}\n"
ADD_DOMAIN+="${ORGA}-${spipHost} "
else
DEL_DOMAIN+="${ORGA}-${spipHost} "
fi
DOMAIN_AREA+="}}\n"
if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
@@ -358,6 +375,9 @@ update() {
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
FOREIGN_POD=$(grep " ${ORGA};" "${KAZ_CONF_PROXY_DIR}/pod_kaz_map" 2>/dev/null | \
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
FOREIGN_SPIP=$(grep " ${ORGA};" "${KAZ_CONF_PROXY_DIR}/spip_kaz_map" 2>/dev/null | \
sed "s/\([^ ]*\) ${ORGA};/ \|\| Host(\`\1\`)/" | tr -d "\r\n")
awk '
BEGIN {cp=1}
/#}}/ {cp=1 ; next};
@@ -371,7 +391,8 @@ update() {
-e "s/{{FOREIGN_NC}}/${FOREIGN_NC}/"\
-e "s/{{FOREIGN_DW}}/${FOREIGN_DW}/"\
-e "s/{{FOREIGN_POD}}/${FOREIGN_POD}/"\
-e "s|\${orga}|${ORGA}-|g"
-e "s/{{FOREIGN_SPIP}}/${FOREIGN_SPIP}/"\
-e "s|\${orga}|${ORGA}|g"
) > "$2"
sed "s/storage_opt:.*/storage_opt: ${quota}/g" -i "$2"
}
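The FOREIGN_SPIP lines added above follow the same recipe as FOREIGN_WP, FOREIGN_DW and FOREIGN_POD: every `some.domain ORGA;` entry in a proxy map file becomes an extra `|| Host(...)` alternative spliced into the Traefik router rule. A standalone sketch of that extraction (throwaway map file, not the real spip_kaz_map):

```shell
#!/bin/bash
# Sketch of the FOREIGN_* extraction used in update() above: a proxy map
# file holds lines like "blog.example.org myorga;", and each match for the
# orga becomes an extra Traefik Host() alternative.
set -eu

foreign_hosts() {                     # $1 = map file, $2 = orga name
  grep " $2;" "$1" 2>/dev/null |
    sed "s/\([^ ]*\) $2;/ || Host(\`\1\`)/" | tr -d "\r\n"
}

# Demo with a throwaway map file:
map=$(mktemp)
printf 'blog.example.org myorga;\nnews.example.org other;\n' > "$map"
foreign_hosts "$map" myorga; echo
rm -f "$map"
```

The resulting fragment is then substituted for the `{{FOREIGN_*}}` markers in the docker-compose template, extending the generated router rule.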
@@ -394,13 +415,18 @@ if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
ln -sf ../../config/orgaTmpl/orga-gen.sh
ln -sf ../../config/orgaTmpl/orga-rm.sh
ln -sf ../../config/orgaTmpl/init-paheko.sh
ln -sf ../../config/orgaTmpl/initdb.d/
ln -sf ../../config/orgaTmpl/app/
#ln -sf ../../config/orgaTmpl/initdb.d/
#ln -sf ../../config/orgaTmpl/app/
ln -sf ../../config/orgaTmpl/wiki-conf/
ln -sf ../../config/orgaTmpl/reload.sh
ln -sf ../../config/orgaTmpl/init-db.sh
fi
if [ ! -d "${KAZ_KEY_DIR}/orgas/$ORGA/" ]; then
rsync -a "${KAZ_CONF_DIR}/orgaTmpl/secret.tmpl/" "${KAZ_KEY_DIR}/orgas/$ORGA/"
${KAZ_BIN_DIR}/secretGen.sh -d $ORGA
fi
if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
# ########## update ${DOCKERS_ENV}
if ! grep -q "proxy_orga=" .env 2> /dev/null
@@ -420,6 +446,7 @@ if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
fi
if [[ -n "${STAGE_DEFAULT}${STAGE_CREATE}" ]]; then
# ########## create volume
./init-volume.sh
fi

View File

@@ -40,6 +40,8 @@ remove () {
sed -i -e "/proxy_${ORGA_FLAG}=/d" "${DOCKERS_ENV}"
sed -i -e "/^${ORGA}-orga$/d" "${ORGA_LIST}"
rm -fr "${KAZ_COMP_DIR}/${ORGA}-orga"
rm -fr "${KAZ_KEY_DIR}/orgas/${ORGA}"
exit;;
[Nn]* )

View File

@@ -0,0 +1,3 @@
ADMIN_USER=@@pass@@castopod2@@p@@
ADMIN_MAIL=admin@@@globalvar@@domain@@gv@@
ADMIN_PASSWORD=@@pass@@castopod3@@p@@

View File

@@ -0,0 +1,4 @@
MYSQL_ROOT_PASSWORD=@@pass@@rootdb@@p@@
MYSQL_USER=@@user@@castopod1@@u@@
MYSQL_PASSWORD=@@pass@@castopod1@@p@@
MYSQL_DATABASE=@@db@@castopod1@@d@@

View File

@@ -0,0 +1,7 @@
CP_EMAIL_SMTP_HOST=
CP_EMAIL_FROM=
CP_EMAIL_SMTP_USERNAME=
CP_EMAIL_SMTP_PASSWORD=
CP_EMAIL_SMTP_PORT=
CP_EMAIL_SMTP_CRYPTO=
CP_REDIS_PASSWORD=

View File

@@ -0,0 +1,9 @@
MYSQL_ROOT_PASSWORD=@@pass@@rootdb@@p@@
MYSQL_DATABASE=@@db@@mattermost@@d@@
MYSQL_USER=@@user@@mattermost@@u@@
MYSQL_PASSWORD=@@pass@@mattermost@@p@@
POSTGRES_USER=@@user@@mattermost@@u@@
POSTGRES_PASSWORD=@@pass@@mattermost@@p@@
POSTGRES_DB=@@db@@mattermost@@d@@

View File

@@ -0,0 +1,5 @@
MM_ADMIN_EMAIL=@@globalvar@@matterHost@@gv@@@@@globalvar@@domain@@gv@@
MM_ADMIN_USER=@@user@@mattermost2@@u@@
MM_ADMIN_PASSWORD=@@pass@@mattermost2@@p@@
MM_SQLSETTINGS_DATASOURCE=postgres://@@user@@mattermost@@u@@:@@pass@@mattermost@@p@@@postgres:5432/@@db@@mattermost@@d@@?sslmode=disable&connect_timeout=10

View File

@@ -0,0 +1,8 @@
MYSQL_ROOT_PASSWORD=@@pass@@rootdb@@p@@
MYSQL_DATABASE=@@db@@nextcloud@@d@@
MYSQL_USER=@@user@@nextcloud@@u@@
MYSQL_PASSWORD=@@pass@@nextcloud@@p@@
#NC_MYSQL_USER=
#NC_MYSQL_PASSWORD=

View File

@@ -0,0 +1,5 @@
NEXTCLOUD_ADMIN_USER=@@user@@nextcloudadmin@@u@@
NEXTCLOUD_ADMIN_PASSWORD=@@pass@@nextcloudadmin@@p@@
MYSQL_HOST=db
RAIN_LOOP=@@pass@@rainloop@@p@@

View File

@@ -0,0 +1,4 @@
MYSQL_ROOT_PASSWORD=@@pass@@rootdb@@p@@
MYSQL_DATABASE=@@db@@spip@@d@@
MYSQL_USER=@@user@@spip@@u@@
MYSQL_PASSWORD=@@pass@@spip@@p@@

View File

@@ -0,0 +1,10 @@
SPIP_AUTO_INSTALL=1
SPIP_DB_SERVER=mysql
SPIP_DB_NAME=@@db@@spip@@d@@
SPIP_DB_LOGIN=@@user@@spip@@u@@
SPIP_DB_PASS=@@pass@@spip@@p@@
SPIP_ADMIN_NAME=admin
SPIP_ADMIN_LOGIN=@@user@@spipadmin@@u@@
SPIP_ADMIN_EMAIL=admin@@@globalvar@@domain@@gv@@
SPIP_ADMIN_PASS=@@pass@@spipadmin@@p@@
PHP_TIMEZONE=Europe/Paris

View File

@@ -0,0 +1,4 @@
MYSQL_ROOT_PASSWORD=@@pass@@rootdb@@p@@
MYSQL_DATABASE=@@db@@wp@@d@@
MYSQL_USER=@@user@@wp@@u@@
MYSQL_PASSWORD=@@pass@@wp@@p@@

View File

@@ -0,0 +1,8 @@
# share with wpDB
WORDPRESS_DB_HOST=db:3306
WORDPRESS_ADMIN_USER=@@user@@adminwp@@u@@
WORDPRESS_ADMIN_PASSWORD=@@pass@@adminwp@@p@@
WORDPRESS_DB_NAME=@@db@@wp@@d@@
WORDPRESS_DB_USER=@@user@@wp@@u@@
WORDPRESS_DB_PASSWORD=@@pass@@wp@@p@@
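The `@@pass@@NAME@@p@@`, `@@user@@NAME@@u@@`, `@@db@@NAME@@d@@` and `@@globalvar@@NAME@@gv@@` tokens in the secret templates above are placeholders, presumably expanded by secretGen.sh into per-orga credentials. The sketch below is NOT the real generator; it only handles `@@pass@@` tokens, to illustrate the one-generated-value-per-name substitution idea:

```shell
#!/bin/bash
# Hedged sketch of rendering a secret template: every @@pass@@NAME@@p@@
# token is replaced by one generated value per NAME, reused wherever the
# same NAME appears (so e.g. rootdb stays identical across files).
set -eu

declare -A PASS

render() {                        # read a template on stdin, write result
  local line name
  while IFS= read -r line; do
    while [[ $line =~ @@pass@@([a-z0-9]+)@@p@@ ]]; do
      name=${BASH_REMATCH[1]}
      if [ -z "${PASS[$name]:-}" ]; then
        # Generate a 16-char alphanumeric secret for this logical name.
        PASS[$name]=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16)
      fi
      line=${line/"${BASH_REMATCH[0]}"/${PASS[$name]}}
    done
    printf '%s\n' "$line"
  done
}

# Demo: both lines reference the same logical secret "wp".
printf 'MYSQL_PASSWORD=@@pass@@wp@@p@@\nWORDPRESS_DB_PASSWORD=@@pass@@wp@@p@@\n' | render
```

Rendering one template per file into `secret/orgas/$ORGA/` matches how orga-gen.sh seeds the per-orga secret directory from `secret.tmpl/` before calling secretGen.sh.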

View File

@@ -1,21 +0,0 @@
#proxy_buffering off;
#proxy_set_header X-Forwarded-Host $host:$server_port;
#proxy_set_header X-Forwarded-Server $host;
#XXX pb proxy_set_header Connection $proxy_connection;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
# mattermost
http2_push_preload on; # Enable HTTP/2 Server Push
add_header Strict-Transport-Security max-age=15768000;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
#proxy_hide_header 'x-frame-options';
#proxy_set_header x-frame-options allowall;
proxy_set_header X-Frame-Options SAMEORIGIN;

View File

@@ -0,0 +1,42 @@
services:
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.52.0
container_name: cadvisor
command:
- "--store_container_labels=false"
- "--whitelisted_container_labels=com.docker.compose.project"
- "--housekeeping_interval=60s"
- "--docker_only=true"
- "--disable_metrics=percpu,sched,tcp,udp,disk,diskIO,hugetlb,referenced_memory,cpu_topology,resctrl"
networks:
- traefikNet
labels:
- "traefik.enable=true"
- "traefik.http.routers.cadvisor-secure.entrypoints=websecure"
- "traefik.http.routers.cadvisor-secure.rule=Host(`cadvisor-${site}.${domain}`)"
#- "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.cadvisor-secure.service=cadvisor"
- "traefik.http.routers.cadvisor-secure.middlewares=test-adminipallowlist@file"
- "traefik.http.services.cadvisor.loadbalancer.server.port=8080"
- "traefik.docker.network=traefikNet"
# ports:
# - 8098:8080
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
- /dev/disk/:/dev/disk:ro
devices:
- /dev/kmsg
privileged: true
restart: unless-stopped
networks:
traefikNet:
external: true
name: traefikNet

View File

@@ -6,7 +6,6 @@ setKazVars
cd $(dirname $0)
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
"${KAZ_BIN_DIR}/gestContainers.sh" --install -M -castopod

View File

@@ -4,7 +4,6 @@ KAZ_ROOT=$(cd $(dirname $0)/../..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
. $KAZ_ROOT/secret/SetAllPass.sh
${KAZ_BIN_DIR}/gestContainers.sh --install -M -cloud

View File

@@ -1,102 +0,0 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/../..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
. $KAZ_ROOT/secret/SetAllPass.sh
#"${KAZ_BIN_DIR}/initCloud.sh"
docker exec -ti -u 33 nextcloudServ /var/www/html/occ app:enable user_ldap
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:delete-config s01
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:create-empty-config
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapAgentName cn=cloud,ou=applications,${ldap_root}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapAgentPassword ${ldap_LDAP_CLOUD_PASSWORD}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapAgentPassword ${ldap_LDAP_CLOUD_PASSWORD}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapBase ${ldap_root}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapBaseGroups ${ldap_root}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapBaseUsers ou=users,${ldap_root}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapExpertUsernameAttr identifiantKaz
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapHost ${ldapServName}
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapPort 389
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapTLS 0
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapLoginFilter "(&(objectclass=nextcloudAccount)(|(cn=%uid)(identifiantKaz=%uid)))"
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapQuotaAttribute nextcloudQuota
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapUserFilter "(&(objectclass=nextcloudAccount)(nextcloudEnabled=TRUE))"
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapUserFilterObjectclass nextcloudAccount
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapEmailAttribute mail
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapUserDisplayName cn
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapUserFilterMode 1
docker exec -ti -u 33 nextcloudServ /var/www/html/occ ldap:set-config s01 ldapConfigurationActive 1
# In mariadb, to let LDAP take over again: delete from oc_users where uid<>'admin';
# docker exec -i nextcloudDB mysql --user=<user> --password=<password> <db> <<< "delete from oc_users where uid<>'admin';"
# Doc : https://help.nextcloud.com/t/migration-to-ldap-keeping-users-and-data/13205
# Example table/keys:
# +-------------------------------+----------------------------------------------------------+
# | Configuration | s01 |
# +-------------------------------+----------------------------------------------------------+
# | hasMemberOfFilterSupport | 0 |
# | homeFolderNamingRule | |
# | lastJpegPhotoLookup | 0 |
# | ldapAgentName | cn=cloud,ou=applications,dc=kaz,dc=sns |
# | ldapAgentPassword | *** |
# | ldapAttributesForGroupSearch | |
# | ldapAttributesForUserSearch | |
# | ldapBackgroundHost | |
# | ldapBackgroundPort | |
# | ldapBackupHost | |
# | ldapBackupPort | |
# | ldapBase | ou=users,dc=kaz,dc=sns |
# | ldapBaseGroups | ou=users,dc=kaz,dc=sns |
# | ldapBaseUsers | ou=users,dc=kaz,dc=sns |
# | ldapCacheTTL | 600 |
# | ldapConfigurationActive | 1 |
# | ldapConnectionTimeout | 15 |
# | ldapDefaultPPolicyDN | |
# | ldapDynamicGroupMemberURL | |
# | ldapEmailAttribute | mail |
# | ldapExperiencedAdmin | 0 |
# | ldapExpertUUIDGroupAttr | |
# | ldapExpertUUIDUserAttr | |
# | ldapExpertUsernameAttr | uid |
# | ldapExtStorageHomeAttribute | |
# | ldapGidNumber | gidNumber |
# | ldapGroupDisplayName | cn |
# | ldapGroupFilter | |
# | ldapGroupFilterGroups | |
# | ldapGroupFilterMode | 0 |
# | ldapGroupFilterObjectclass | |
# | ldapGroupMemberAssocAttr | |
# | ldapHost | ldap |
# | ldapIgnoreNamingRules | |
# | ldapLoginFilter | (&(|(objectclass=nextcloudAccount))(cn=%uid)) |
# | ldapLoginFilterAttributes | |
# | ldapLoginFilterEmail | 0 |
# | ldapLoginFilterMode | 0 |
# | ldapLoginFilterUsername | 1 |
# | ldapMatchingRuleInChainState | unknown |
# | ldapNestedGroups | 0 |
# | ldapOverrideMainServer | |
# | ldapPagingSize | 500 |
# | ldapPort | 389 |
# | ldapQuotaAttribute | nextcloudQuota |
# | ldapQuotaDefault | |
# | ldapTLS | 0 |
# | ldapUserAvatarRule | default |
# | ldapUserDisplayName | cn |
# | ldapUserDisplayName2 | |
# | ldapUserFilter | (&(objectclass=nextcloudAccount)(nextcloudEnabled=TRUE)) |
# | ldapUserFilterGroups | |
# | ldapUserFilterMode | 1 |
# | ldapUserFilterObjectclass | nextcloudAccount |
# | ldapUuidGroupAttribute | auto |
# | ldapUuidUserAttribute | auto |
# | turnOffCertCheck | 0 |
# | turnOnPasswordChange | 0 |
# | useMemberOfToDetectMembership | 1 |
# +-------------------------------+----------------------------------------------------------+

View File

@@ -27,11 +27,13 @@ services:
- "traefik.docker.network=giteaNet"
db:
image: mariadb:10.5
image: mariadb
container_name: ${gitDBName}
restart: ${restartPolicy}
env_file:
- ../../secret/env-${gitDBName}
environment:
- MARIADB_AUTO_UPGRADE=1
volumes:
- gitDB:/var/lib/mysql
- /etc/timezone:/etc/timezone:ro

View File

@@ -1,7 +1,7 @@
services:
prometheus:
image: prom/prometheus:v2.15.2
image: prom/prometheus:v3.3.0
restart: unless-stopped
container_name: ${prometheusServName}
volumes:
@@ -10,27 +10,27 @@ services:
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
command:
- "--web.route-prefix=/"
- "--web.external-url=https://${site}.${domain}/prometheus"
# - "--web.route-prefix=/"
# - "--web.external-url=https://prometheus.${domain}"
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/usr/share/prometheus/console_libraries"
- "--web.console.templates=/usr/share/prometheus/consoles"
networks:
- traefikNet
labels:
- "traefik.enable=true"
- "traefik.http.routers.prometheus-secure.entrypoints=websecure"
- "traefik.http.middlewares.prometheus-stripprefix.stripprefix.prefixes=/prometheus"
- "traefik.http.routers.prometheus-secure.rule=Host(`${site}.${domain}`) && PathPrefix(`/prometheus`)"
# - "traefik.http.routers.prometheus-secure.tls=true"
- "traefik.http.routers.prometheus-secure.middlewares=prometheus-stripprefix,test-adminiallowlist@file,traefik-auth"
- "traefik.http.routers.prometheus-secure.service=prometheus"
- "traefik.http.services.prometheus.loadbalancer.server.port=9090"
- "traefik.docker.network=traefikNet"
# labels:
# - "traefik.enable=true"
# - "traefik.http.routers.prometheus-secure.entrypoints=websecure"
# - "traefik.http.middlewares.prometheus-stripprefix.stripprefix.prefixes=/prometheus"
# - "traefik.http.routers.prometheus-secure.rule=Host(`prometheus.${domain}`)"
# # - "traefik.http.routers.prometheus-secure.tls=true"
# - "traefik.http.routers.prometheus-secure.middlewares=prometheus-stripprefix,test-adminiallowlist@file,traefik-auth"
# - "traefik.http.routers.prometheus-secure.service=prometheus"
# - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
# - "traefik.docker.network=traefikNet"
grafana:
image: grafana/grafana:6.6.1
image: grafana/grafana:11.6.0
restart: unless-stopped
container_name: ${grafanaServName}
volumes:
@@ -48,8 +48,8 @@ services:
- "traefik.enable=true"
- "traefik.http.routers.grafana-secure.entrypoints=websecure"
- "traefik.http.middlewares.grafana-stripprefix.stripprefix.prefixes=/grafana"
- "traefik.http.routers.grafana-secure.rule=Host(`${site}.${domain}`) && PathPrefix(`/grafana`)"
# - "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.grafana-secure.rule=Host(`grafana.${domain}`)"
#- "traefik.http.routers.grafana-secure.tls=true"
- "traefik.http.routers.grafana-secure.service=grafana"
- "traefik.http.routers.grafana-secure.middlewares=grafana-stripprefix,test-adminipallowlist@file,traefik-auth"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -0,0 +1,874 @@
{
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__elements": {},
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "11.6.0"
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "stat",
"name": "Stat",
"version": ""
},
{
"type": "panel",
"id": "timeseries",
"name": "Time series",
"version": ""
}
],
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": {
"type": "grafana",
"uid": "-- Grafana --"
},
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"description": "Docker monitoring with Prometheus and cAdvisor",
"editable": true,
"fiscalYearStartMonth": 0,
"graphTooltip": 1,
"id": null,
"links": [
{
"asDropdown": false,
"icon": "external link",
"includeVars": false,
"keepTime": false,
"tags": [],
"targetBlank": true,
"title": "Portainer",
"tooltip": "",
"type": "link",
"url": "https://portainer.kaz.bzh/"
}
],
"panels": [
{
"collapsed": false,
"gridPos": {
"h": 1,
"w": 24,
"x": 0,
"y": 0
},
"id": 8,
"panels": [],
"repeat": "host",
"title": "$host",
"type": "row"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 0,
"y": 1
},
"id": 7,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "count(container_last_seen{image!=\"\", host=\"$host\"})",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_last_seen",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Running containers",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "mbytes"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 8,
"y": 1
},
"id": 5,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(container_memory_usage_bytes{image!=\"\", host=\"$host\"})/1024/1024",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Total Memory Usage",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"mappings": [
{
"options": {
"match": "null",
"result": {
"text": "N/A"
}
},
"type": "special"
}
],
"max": 100,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": []
},
"gridPos": {
"h": 3,
"w": 8,
"x": 16,
"y": 1
},
"id": 6,
"maxDataPoints": 100,
"options": {
"colorMode": "none",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "horizontal",
"percentChangeColorMode": "standard",
"reduceOptions": {
"calcs": [
"lastNotNull"
],
"fields": "",
"values": false
},
"showPercentChange": false,
"textMode": "auto",
"wideLayout": true
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "sum(rate(container_cpu_user_seconds_total{image!=\"\", host=\"$host\"}[5m]) * 100)",
"intervalFactor": 2,
"legendFormat": "",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 240
}
],
"title": "Total CPU Usage",
"transparent": true,
"type": "stat"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [
{
"oneClick": false,
"targetBlank": true,
"title": "Portainer host",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers"
},
{
"targetBlank": true,
"title": "Portainer container",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers/${__field.labels.id.21}${__field.labels.id.22}${__field.labels.id.23}${__field.labels.id.24}${__field.labels.id.25}${__field.labels.id.26}${__field.labels.id.27}${__field.labels.id.28}${__field.labels.id.29}${__field.labels.id.30}${__field.labels.id.31}${__field.labels.id.32}"
}
],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": [
{
"__systemRef": "hideSeriesFrom",
"matcher": {
"id": "byNames",
"options": {
"mode": "exclude",
"names": [
"lagalette-orga/lagalette-wpServ"
],
"prefix": "All except:",
"readOnly": true
}
},
"properties": [
{
"id": "custom.hideFrom",
"value": {
"legend": false,
"tooltip": false,
"viz": true
}
}
]
}
]
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 4
},
"id": 2,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "rate(container_cpu_user_seconds_total{image!=\"\", host=\"$host\"}[5m]) * 100",
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "cpu",
"range": true,
"refId": "A",
"step": 10
}
],
"title": "CPU Usage",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [
{
"targetBlank": true,
"title": "Portainer host",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers"
},
{
"targetBlank": true,
"title": "Portainer container",
"url": "https://portainer.kaz.bzh/#!/${__field.labels.portainer_id}/docker/containers/${__field.labels.id.21}${__field.labels.id.22}${__field.labels.id.23}${__field.labels.id.24}${__field.labels.id.25}${__field.labels.id.26}${__field.labels.id.27}${__field.labels.id.28}${__field.labels.id.29}${__field.labels.id.30}${__field.labels.id.31}${__field.labels.id.32}"
}
],
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "bytes"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 24,
"x": 0,
"y": 11
},
"id": 1,
"links": [
{
"targetBlank": true,
"title": "Portainer",
"url": "https://portainer.kaz.bzh"
}
],
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "builder",
"expr": "container_memory_usage_bytes{image!=\"\", host=\"$host\"}",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_memory_usage_bytes",
"range": true,
"refId": "A",
"step": 10
}
],
"title": "Memory Usage",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 18
},
"id": 3,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "irate(container_network_receive_bytes_total{image!=\"\", host=\"$host\"}[5m])",
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_network_receive_bytes_total",
"range": true,
"refId": "A",
"step": 20
}
],
"title": "Network Rx",
"transformations": [
{
"id": "renameByRegex",
"options": {
"regex": "(.*)",
"renamePattern": "$1"
}
}
],
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 10,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 2,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "never",
"spanNulls": true,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 18
},
"id": 9,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull"
],
"displayMode": "table",
"placement": "right",
"showLegend": true,
"sortBy": "Mean",
"sortDesc": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "desc"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus",
"uid": "${DS_PROMETHEUS}"
},
"editorMode": "code",
"expr": "irate(container_network_transmit_bytes_total{image!=\"\", host=\"$host\"}[5m])",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{container_label_com_docker_compose_project}}/{{name}}",
"metric": "container_network_receive_bytes_total",
"range": true,
"refId": "B",
"step": 20
}
],
"title": "Network Tx",
"type": "timeseries"
}
],
"refresh": "30s",
"schemaVersion": 41,
"tags": [],
"templating": {
"list": [
{
"allowCustomValue": false,
"current": {},
"definition": "label_values(host)",
"includeAll": true,
"multi": true,
"name": "host",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(host)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
},
{
"baseFilters": [],
"datasource": {
"type": "prometheus",
"uid": "PBFA97CFB590B2093"
},
"filters": [
{
"condition": "",
"key": "container_label_com_docker_compose_project",
"keyLabel": "container_label_com_docker_compose_project",
"operator": "=~",
"value": ".*",
"valueLabels": [
".*"
]
}
],
"hide": 1,
"name": "filter",
"type": "adhoc"
}
]
},
"time": {
"from": "now-3h",
"to": "now"
},
"timepicker": {},
"timezone": "browser",
"title": "Docker monitoring par host",
"uid": "eekgch7tdq8sgc",
"version": 29,
"weekStart": ""
}
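The "Portainer container" link in the dashboard above chains `${__field.labels.id.21}` through `${__field.labels.id.32}` — Grafana's per-character label indexing used as a substring hack, apparently to pull a 12-character slice of the cadvisor `id` label (a cgroup path containing the container id) since the link syntax has no slicing. A minimal Python sketch of the same extraction, with a hypothetical label value:

```python
# Hypothetical cadvisor "id" label value (cgroup path embedding the 64-char container id);
# the real label shape depends on the cgroup driver.
cgroup_id = "/docker/0123456789abcdef0123456789abcdef0123456789abcdef0123456789ab"

# Characters 21..32 (1-indexed, as in the chained Grafana variables)
# correspond to the 0-indexed Python slice [20:32].
short_id = cgroup_id[20:32]
print(short_id)
```

Which 12 characters end up in the link therefore depends entirely on the prefix of the `id` label; if the cgroup layout changes, the chained indices silently point elsewhere.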


@@ -0,0 +1,442 @@
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "Bps"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 14
},
"id": 84,
"options": {
"legend": {
"calcs": [
"mean",
"lastNotNull",
"max",
"min"
],
"displayMode": "table",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "multi",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "rate(node_network_receive_bytes_total{host=\"$host\", device=~\"$device\"}[5m])",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{device}} - rx",
"range": true,
"refId": "A",
"step": 240
},
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "- rate(node_network_transmit_bytes_total{host=\"$host\", device=~\"$device\"}[5m])",
"hide": false,
"instant": false,
"legendFormat": "{{device}} - tx",
"range": true,
"refId": "B"
}
],
"title": "Network Traffic Rx",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"description": "",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"links": [],
"mappings": [],
"max": 100,
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percent"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 14
},
"id": 174,
"options": {
"alertThreshold": true,
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": "(node_filesystem_size_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}-node_filesystem_free_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}) *100/(node_filesystem_avail_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}+(node_filesystem_size_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}-node_filesystem_free_bytes{host=\"$host\",fstype=~\"ext.*|xfs\",mountpoint !~\".*pod.*\"}))",
"format": "time_series",
"instant": false,
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{mountpoint}}",
"refId": "A"
},
{
"datasource": {
"type": "prometheus"
},
"expr": "node_filesystem_files_free{host=\"$host\",fstype=~\"ext.?|xfs\"} / node_filesystem_files{host=\"$host\",fstype=~\"ext.?|xfs\"}",
"hide": true,
"interval": "",
"legendFormat": "Inodes{{instance}}{{mountpoint}}",
"refId": "B"
}
],
"title": "Disk",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"description": "Physical machines only",
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"min": 0,
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
},
"unit": "celsius"
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 21
},
"id": 175,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"editorMode": "code",
"expr": "node_thermal_zone_temp{host=\"$host\"}",
"legendFormat": "{{type}}-zone{{zone}}",
"range": true,
"refId": "A"
}
],
"title": "Temperature",
"type": "timeseries"
},
{
"datasource": {
"type": "prometheus"
},
"fieldConfig": {
"defaults": {
"color": {
"mode": "palette-classic"
},
"custom": {
"axisBorderShow": false,
"axisCenteredZero": false,
"axisColorMode": "text",
"axisLabel": "",
"axisPlacement": "auto",
"barAlignment": 0,
"barWidthFactor": 0.6,
"drawStyle": "line",
"fillOpacity": 0,
"gradientMode": "none",
"hideFrom": {
"legend": false,
"tooltip": false,
"viz": false
},
"insertNulls": false,
"lineInterpolation": "linear",
"lineWidth": 1,
"pointSize": 5,
"scaleDistribution": {
"type": "linear"
},
"showPoints": "auto",
"spanNulls": false,
"stacking": {
"group": "A",
"mode": "none"
},
"thresholdsStyle": {
"mode": "off"
}
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green"
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 7,
"w": 12,
"x": 12,
"y": 21
},
"id": 176,
"options": {
"legend": {
"calcs": [],
"displayMode": "list",
"placement": "bottom",
"showLegend": true
},
"tooltip": {
"hideZeros": false,
"mode": "single",
"sort": "none"
}
},
"pluginVersion": "11.6.0",
"targets": [
{
"editorMode": "code",
"expr": "rate(node_disk_reads_completed_total{host=\"$host\"}[2m])",
"legendFormat": "{{device}} reads",
"range": true,
"refId": "A"
},
{
"datasource": {
"type": "prometheus"
},
"editorMode": "code",
"expr": " rate(node_disk_writes_completed_total{host=~\"$host\"}[2m])",
"hide": false,
"instant": false,
"legendFormat": "{{device}} writes",
"range": true,
"refId": "B"
}
],
"title": "Disks IOs",
"type": "timeseries"
}
],
"preload": false,
"refresh": "5s",
"schemaVersion": 41,
"tags": [],
"templating": {
"list": [
{
"allowCustomValue": false,
"current": {
"text": "kazguel",
"value": "kazguel"
},
"definition": "label_values(host)",
"includeAll": false,
"name": "host",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(host)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
},
{
"allowCustomValue": false,
"current": {
"text": [
"ens18"
],
"value": [
"ens18"
]
},
"definition": "label_values(node_network_info{device!~\"br.*|veth.*|lo.*|tap.*|docker.*|vibr.*\"},device)",
"includeAll": true,
"label": "NIC",
"multi": true,
"name": "device",
"options": [],
"query": {
"qryType": 1,
"query": "label_values(node_network_info{device!~\"br.*|veth.*|lo.*|tap.*|docker.*|vibr.*\"},device)",
"refId": "PrometheusVariableQueryEditor-VariableQuery"
},
"refresh": 1,
"regex": "",
"type": "query"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "Vue Serveur",
"uid": "deki6c3qvihhcd",
"version": 22
}
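Both dashboards lean on PromQL `rate()`/`irate()` to turn monotonically increasing counters (`container_cpu_user_seconds_total`, `node_network_receive_bytes_total`, `node_disk_reads_completed_total`, …) into per-second rates. A simplified Python sketch of what `rate()` computes over a window (ignoring counter resets and boundary extrapolation, which the real function handles):

```python
def prom_rate(samples):
    """samples: list of (timestamp_seconds, counter_value), oldest first.
    Simplified PromQL rate(): per-second increase between the oldest and
    newest sample in the window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# CPU-seconds counter sampled every 60 s over a 2-minute window
cpu_seconds = [(0, 100.0), (60, 130.0), (120, 160.0)]
cpu_percent = prom_rate(cpu_seconds) * 100  # as in the "CPU Usage" panel expr
```

This is why the panels multiply by 100 only for CPU: a counter of seconds consumed per second is already a fraction of one core.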

File diff suppressed because it is too large.


@@ -1,12 +1,108 @@
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_interval: 60s
evaluation_interval: 60s
scrape_timeout: 55s
rule_files:
- 'alert.rules'
scrape_configs:
- job_name: 'traefik'
scrape_interval: 5s
# unused for now
#- job_name: 'traefik'
# scrape_interval: 5s
# static_configs:
# - targets: ['reverse-proxy:8080']
- job_name: prometheus
static_configs:
- targets: ['dashboard.kaz.sns:8289','dashboard2.kaz.sns:8289']
- targets: ["prometheus:9090"]
- job_name: cadvisor-prod1
scheme: "https"
static_configs:
- targets: ["cadvisor-prod1.kaz.bzh:443"]
labels:
host: 'prod1'
portainer_id: 2
- job_name: cadvisor-prod2
scheme: "https"
static_configs:
- targets: ["cadvisor-prod2.kaz.bzh:443"]
labels:
host: 'prod2'
portainer_id: 4
- job_name: cadvisor-kazoulet
scheme: "https"
static_configs:
- targets: ["cadvisor-kazoulet.kaz.bzh:443"]
labels:
host: 'kazoulet'
portainer_id: 3
- job_name: cadvisor-tykaz
scheme: "https"
static_configs:
- targets: ["cadvisor-tykaz.kaz.bzh:443"]
labels:
host: 'tykaz'
portainer_id: 10
- job_name: cadvisor-kazguel
scheme: "https"
static_configs:
- targets: ["cadvisor-kazguel.kaz.bzh:443"]
labels:
host: 'kazguel'
portainer_id: 11
- job_name: cadvisor-kazkouil
scheme: "https"
static_configs:
- targets: ["cadvisor-dev.kazkouil.fr:443"]
labels:
host: 'kazkouil'
portainer_id: 5
- job_name: node-exporter-prod1
static_configs:
# - targets: ["prod1.kaz.bzh:9100","prod2.kaz.bzh:9100","kazoulet.kaz.bzh:9100","tykaz.kaz.bzh:9100","kazguel.kaz.bzh:9100","kazkouil.fr:9100"]
- targets: ["prod1.kaz.bzh:9100"]
labels:
host: 'prod1'
- job_name: node-exporter-prod2
static_configs:
# - targets: ["prod1.kaz.bzh:9100","prod2.kaz.bzh:9100","kazoulet.kaz.bzh:9100","tykaz.kaz.bzh:9100","kazguel.kaz.bzh:9100","kazkouil.fr:9100"]
- targets: ["prod2.kaz.bzh:9100"]
labels:
host: 'prod2'
- job_name: node-exporter-kazoulet
static_configs:
- targets: ["kazoulet.kaz.bzh:9100"]
labels:
host: 'kazoulet'
- job_name: node-exporter-tykaz
static_configs:
- targets: ["tykaz.kaz.bzh:9100"]
labels:
host: 'tykaz'
- job_name: node-exporter-kazguel
static_configs:
- targets: ["kazguel.kaz.bzh:9100"]
labels:
host: 'kazguel'
- job_name: node-exporter-kazkouil
static_configs:
- targets: ["kazkouil.fr:9100"]
labels:
host: 'kazkouil'
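The per-host `cadvisor-*` and `node-exporter-*` jobs above are almost entirely repetitive. A throwaway Python sketch (hypothetical, not part of the repo) that would generate the cadvisor jobs from a single host-to-portainer_id map — noting that kazkouil is an exception, scraping `cadvisor-dev.kazkouil.fr` instead of the `.kaz.bzh` pattern:

```python
import json

# Assumed map of host name -> Portainer endpoint id (taken from the config above)
HOSTS = {"prod1": 2, "prod2": 4, "kazoulet": 3, "tykaz": 10, "kazguel": 11}

def cadvisor_job(host, portainer_id):
    """Build one scrape_config entry matching the repeated pattern above."""
    return {
        "job_name": f"cadvisor-{host}",
        "scheme": "https",
        "static_configs": [{
            "targets": [f"cadvisor-{host}.kaz.bzh:443"],
            "labels": {"host": host, "portainer_id": portainer_id},
        }],
    }

scrape_configs = [cadvisor_job(h, pid) for h, pid in HOSTS.items()]
print(json.dumps(scrape_configs[0], indent=2))
```

Keeping the config hand-written is a reasonable choice at this scale; the sketch only makes the shared structure (and the one irregular host) explicit.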


@@ -5,7 +5,9 @@ NEWPASSWORD=$(base64 -d <<< $2)
OLDPASSWORD=$(base64 -d <<< $3)
URL_AGORA="https://${matterHost}.${domain}"
mattermost_token=${LDAPUI_MM_ADMIN_TOKEN}
#mattermost_token=${LDAPUI_MM_ADMIN_TOKEN}
. $KAZ_KEY_DIR/env-mattermostAdmin
IDUSER=$(curl -s -H "Authorization: Bearer ${mattermost_token}" "${URL_AGORA}/api/v4/users/email/${EMAIL}" | awk -F "," '{print $1}' | sed -e 's/{"id"://g' -e 's/"//g')
if [ ${IDUSER} == 'app.user.missing_account.const' ]

dockers/mastodon/.env Symbolic link

@@ -0,0 +1 @@
../../config/dockers.env


@@ -0,0 +1,6 @@
Initialize the DB:
docker-compose run --rm web bundle exec rails db:setup
Create an admin account:
tootctl accounts create adminkaz --email admin@kaz.bzh --confirmed --role Owner
tootctl accounts approve adminkaz


@@ -0,0 +1,184 @@
# This file is designed for production server deployment, not local development work
# For a containerized local dev environment, see: https://github.com/mastodon/mastodon/blob/main/docs/DEVELOPMENT.md#docker
services:
db:
container_name: ${mastodonDBName}
restart: ${restartPolicy}
image: postgres:14-alpine
shm_size: 256mb
networks:
- mastodonNet
healthcheck:
test: ['CMD', 'pg_isready', '-U', 'postgres']
volumes:
- postgres:/var/lib/postgresql/data
# environment:
# - 'POSTGRES_HOST_AUTH_METHOD=trust'
env_file:
- ../../secret/env-mastodonDB
redis:
container_name: ${mastodonRedisName}
restart: ${restartPolicy}
image: redis:7-alpine
networks:
- mastodonNet
healthcheck:
test: ['CMD', 'redis-cli', 'ping']
volumes:
- redis:/data
# es:
# restart: always
# image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
# environment:
# - "ES_JAVA_OPTS=-Xms512m -Xmx512m -Des.enforce.bootstrap.checks=true"
# - "xpack.license.self_generated.type=basic"
# - "xpack.security.enabled=false"
# - "xpack.watcher.enabled=false"
# - "xpack.graph.enabled=false"
# - "xpack.ml.enabled=false"
# - "bootstrap.memory_lock=true"
# - "cluster.name=es-mastodon"
# - "discovery.type=single-node"
# - "thread_pool.write.queue_size=1000"
# networks:
# - external_network
# - internal_network
# healthcheck:
# test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
# volumes:
# - ./elasticsearch:/usr/share/elasticsearch/data
# ulimits:
# memlock:
# soft: -1
# hard: -1
# nofile:
# soft: 65536
# hard: 65536
# ports:
# - '127.0.0.1:9200:9200'
web:
# You can uncomment the following line if you want to not use the prebuilt image, for example if you have local code changes
# build: .
container_name: ${mastodonServName}
image: ghcr.io/mastodon/mastodon:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
- ../../secret/env-mastodonDB
command: bundle exec puma -C config/puma.rb
networks:
- mastodonNet
healthcheck:
# prettier-ignore
test: ['CMD-SHELL',"curl -s --noproxy localhost localhost:3000/health | grep -q 'OK' || exit 1"]
ports:
- '127.0.0.1:3000:3000'
depends_on:
- db
- redis
# - es
volumes:
- public_system:/mastodon/public/system
- images:/mastodon/app/javascript/images
labels:
- "traefik.enable=true"
- "traefik.http.routers.koz.rule=Host(`${mastodonHost}.${domain}`)"
- "traefik.http.services.koz.loadbalancer.server.port=3000"
- "traefik.docker.network=mastodonNet"
streaming:
# You can uncomment the following lines if you want to not use the prebuilt image, for example if you have local code changes
# build:
# dockerfile: ./streaming/Dockerfile
# context: .
container_name: ${mastodonStreamingName}
image: ghcr.io/mastodon/mastodon-streaming:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
command: node ./streaming/index.js
networks:
- mastodonNet
healthcheck:
# prettier-ignore
test: ['CMD-SHELL', "curl -s --noproxy localhost localhost:4000/api/v1/streaming/health | grep -q 'OK' || exit 1"]
ports:
- '127.0.0.1:4000:4000'
depends_on:
- db
- redis
labels:
- "traefik.enable=true"
- "traefik.http.routers.kozs.rule=(Host(`${mastodonHost}.${domain}`) && PathPrefix(`/api/v1/streaming`))"
- "traefik.http.services.kozs.loadbalancer.server.port=4000"
- "traefik.docker.network=mastodonNet"
sidekiq:
# You can uncomment the following line if you want to not use the prebuilt image, for example if you have local code changes
# build: .
container_name: ${mastodonSidekiqName}
image: ghcr.io/mastodon/mastodon:v4.3.6
restart: ${restartPolicy}
environment:
- LOCAL_DOMAIN=${mastodonHost}.${domain}
- SMTP_SERVER=smtp.${domain}
- SMTP_LOGIN=admin@${domain}
- SMTP_FROM_ADDRESS=admin@${domain}
env_file:
- env-config
- ../../secret/env-mastodonServ
command: bundle exec sidekiq
depends_on:
- db
- redis
networks:
- mastodonNet
volumes:
- public_system:/mastodon/public/system
healthcheck:
test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]
## Uncomment to enable federation with tor instances along with adding the following ENV variables
## http_hidden_proxy=http://privoxy:8118
## ALLOW_ACCESS_TO_HIDDEN_SERVICE=true
# tor:
# image: sirboops/tor
# networks:
# - external_network
# - internal_network
#
# privoxy:
# image: sirboops/privoxy
# volumes:
# - ./priv-config:/opt/config
# networks:
# - external_network
# - internal_network
volumes:
postgres:
redis:
public_system:
images:
networks:
mastodonNet:
external: true
name: mastodonNet

dockers/mastodon/env-config Normal file

@@ -0,0 +1,113 @@
# This is a sample configuration file. You can generate your configuration
# with the `bundle exec rails mastodon:setup` interactive setup wizard, but to customize
# your setup even further, you'll need to edit it manually. This sample does
# not demonstrate all available configuration options. Please look at
# https://docs.joinmastodon.org/admin/config/ for the full documentation.
# Note that this file accepts slightly different syntax depending on whether
# you are using `docker-compose` or not. In particular, if you use
# `docker-compose`, the value of each declared variable will be taken verbatim,
# including surrounding quotes.
# See: https://github.com/mastodon/mastodon/issues/16895
# Federation
# ----------
# This identifies your server and cannot be changed safely later
# ----------
# LOCAL_DOMAIN=
# Redis
# -----
REDIS_HOST=redis
REDIS_PORT=
# PostgreSQL
# ----------
DB_HOST=db
#DB_USER=postgres
#DB_NAME=postgres
#DB_PASS=
DB_PORT=5432
# Elasticsearch (optional)
# ------------------------
ES_ENABLED=false
ES_HOST=localhost
ES_PORT=9200
# Authentication for ES (optional)
ES_USER=elastic
ES_PASS=password
# Secrets
# -------
# Make sure to use `bundle exec rails secret` to generate secrets
# -------
#SECRET_KEY_BASE=
#OTP_SECRET=
# Encryption secrets
# ------------------
# Must be available (and set to same values) for all server processes
# These are private/secret values, do not share outside hosting environment
# Use `bin/rails db:encryption:init` to generate fresh secrets
# Do NOT change these secrets once in use, as this would cause data loss and other issues
# ------------------
#ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=
#ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=
#ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=
# Web Push
# --------
# Generate with `bundle exec rails mastodon:webpush:generate_vapid_key`
# --------
#VAPID_PRIVATE_KEY=
#VAPID_PUBLIC_KEY=
# Sending mail
# ------------
#SMTP_SERVER=
SMTP_PORT=587
#SMTP_LOGIN=
#SMTP_PASSWORD=
#SMTP_FROM_ADDRESS=
# File storage (optional)
# -----------------------
S3_ENABLED=false
S3_BUCKET=files.example.com
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
S3_ALIAS_HOST=files.example.com
# IP and session retention
# -----------------------
# Make sure to modify the scheduling of ip_cleanup_scheduler in config/sidekiq.yml
# to be less than daily if you lower IP_RETENTION_PERIOD below two days (172800).
# -----------------------
IP_RETENTION_PERIOD=31556952
SESSION_RETENTION_PERIOD=31556952
# Fetch All Replies Behavior
# --------------------------
# When a user expands a post (DetailedStatus view), fetch all of its replies
# (default: false)
FETCH_REPLIES_ENABLED=false
# Period to wait between fetching replies (in minutes)
FETCH_REPLIES_COOLDOWN_MINUTES=15
# Period to wait after a post is first created before fetching its replies (in minutes)
FETCH_REPLIES_INITIAL_WAIT_MINUTES=5
# Max number of replies to fetch - total, recursively through a whole reply tree
FETCH_REPLIES_MAX_GLOBAL=1000
# Max number of replies to fetch - for a single post
FETCH_REPLIES_MAX_SINGLE=500
# Max number of replies Collection pages to fetch - total
FETCH_REPLIES_MAX_PAGES=500
SINGLE_USER_MODE=false
#EMAIL_DOMAIN_ALLOWLIST=


@@ -1,7 +1,7 @@
services:
app:
image: mattermost/mattermost-team-edition:10.5
image: mattermost/mattermost-team-edition:10.11.1
container_name: ${mattermostServName}
restart: ${restartPolicy}
volumes:
@@ -39,12 +39,12 @@ services:
- "traefik.http.routers.${mattermostServName}.rule=Host(`${matterHost}.${domain}`)"
- "traefik.http.services.${mattermostServName}.loadbalancer.server.port=${matterPort}"
- "traefik.docker.network=mattermostNet"
healthcheck:
test: ["CMD", "curl", "-f", "http://app:${matterPort}"]
interval: 20s
retries: 10
start_period: 20s
timeout: 10s
# healthcheck:
# test: ["CMD", "curl", "-f", "http://app:${matterPort}"]
# interval: 20s
# retries: 10
# start_period: 20s
# timeout: 10s
postgres:
image: postgres:17-alpine


@@ -6,8 +6,11 @@ setKazVars
cd $(dirname $0)
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
"${KAZ_BIN_DIR}/gestContainers.sh" --install -M -agora
docker exec ${mattermostServName} mmctl auth login https://${matterHost}.${domain} --name local-server --username ${mattermost_MM_ADMIN_USER} --password ${mattermost_MM_ADMIN_PASSWORD}
docker exec ${mattermostServName} mmctl channel create --team kaz --name "une-question--un-soucis" --display-name "Une question ? Un souci ?"
docker exec ${mattermostServName} mmctl channel create --team kaz --name "cafe-du-commerce--ouvert-2424h" --display-name "Café du commerce"
docker exec ${mattermostServName} mmctl channel create --team kaz --name "creation-comptes" --display-name "Création comptes"


@@ -1,4 +1,4 @@
FROM paheko/paheko:1.3.12
FROM paheko/paheko:1.3.15
ENV PAHEKO_DIR /var/www/paheko
@@ -11,6 +11,9 @@ RUN mkdir ${PAHEKO_DIR}/users
RUN docker-php-ext-install calendar
RUN apt-get update
RUN apt-get install -y libwebp-dev
RUN docker-php-ext-configure gd --with-jpeg --with-freetype --with-webp
RUN docker-php-ext-install gd
# Billing plugin (the only one that is not part of the base distribution)
RUN apt-get install unzip


@@ -127,4 +127,4 @@ define('Paheko\SHOW_ERRORS', true);
#add by fab le 21/04/2022
//const PDF_COMMAND = 'prince';
# const PDF_COMMAND = 'auto';
const PDF_COMMAND = 'chromium --no-sandbox --headless --disable-dev-shm-usage --autoplay-policy=no-user-gesture-required --no-first-run --disable-gpu --disable-features=DefaultPassthroughCommandDecoder --use-fake-ui-for-media-stream --use-fake-device-for-media-stream --disable-sync --print-to-pdf=%2$s %1$s';
const PDF_COMMAND = 'chromium --no-sandbox --headless --no-pdf-header-footer --disable-dev-shm-usage --autoplay-policy=no-user-gesture-required --no-first-run --disable-gpu --disable-features=DefaultPassthroughCommandDecoder --use-fake-ui-for-media-stream --use-fake-device-for-media-stream --disable-sync --print-to-pdf=%2$s %1$s';

Some files were not shown because too many files have changed in this diff.