first commit

François 2024-06-03 18:43:35 +02:00
parent 2da01a3f6e
commit f501d519af
883 changed files with 71550 additions and 2 deletions

LICENSE Normal file
@@ -0,0 +1,515 @@
CeCILL-B FREE SOFTWARE LICENSE AGREEMENT
Notice
This Agreement is a Free Software license agreement that is the result
of discussions between its authors in order to ensure compliance with
the two main principles guiding its drafting:
* firstly, compliance with the principles governing the distribution
of Free Software: access to source code, broad rights granted to
users,
* secondly, the election of a governing law, French law, with which
it is conformant, both as regards the law of torts and
intellectual property law, and the protection that it offers to
both authors and holders of the economic rights over software.
The authors of the CeCILL-B (for Ce[a] C[nrs] I[nria] L[ogiciel] L[ibre])
license are:
Commissariat à l'Energie Atomique - CEA, a public scientific, technical
and industrial research establishment, having its principal place of
business at 25 rue Leblanc, immeuble Le Ponant D, 75015 Paris, France.
Centre National de la Recherche Scientifique - CNRS, a public scientific
and technological establishment, having its principal place of business
at 3 rue Michel-Ange, 75794 Paris cedex 16, France.
Institut National de Recherche en Informatique et en Automatique -
INRIA, a public scientific and technological establishment, having its
principal place of business at Domaine de Voluceau, Rocquencourt, BP
105, 78153 Le Chesnay cedex, France.
Preamble
This Agreement is an open source software license intended to give users
significant freedom to modify and redistribute the software licensed
hereunder.
The exercising of this freedom is conditional upon a strong obligation
of giving credits for everybody that distributes a software
incorporating a software ruled by the current license so as all
contributions to be properly identified and acknowledged.
In consideration of access to the source code and the rights to copy,
modify and redistribute granted by the license, users are provided only
with a limited warranty and the software's author, the holder of the
economic rights, and the successive licensors only have limited liability.
In this respect, the risks associated with loading, using, modifying
and/or developing or reproducing the software by the user are brought to
the user's attention, given its Free Software status, which may make it
complicated to use, with the result that its use is reserved for
developers and experienced professionals having in-depth computer
knowledge. Users are therefore encouraged to load and test the
suitability of the software as regards their requirements in conditions
enabling the security of their systems and/or data to be ensured and,
more generally, to use and operate it in the same conditions of
security. This Agreement may be freely reproduced and published,
provided it is not altered, and that no provisions are either added or
removed herefrom.
This Agreement may apply to any or all software for which the holder of
the economic rights decides to submit the use thereof to its provisions.
Article 1 - DEFINITIONS
For the purpose of this Agreement, when the following expressions
commence with a capital letter, they shall have the following meaning:
Agreement: means this license agreement, and its possible subsequent
versions and annexes.
Software: means the software in its Object Code and/or Source Code form
and, where applicable, its documentation, "as is" when the Licensee
accepts the Agreement.
Initial Software: means the Software in its Source Code and possibly its
Object Code form and, where applicable, its documentation, "as is" when
it is first distributed under the terms and conditions of the Agreement.
Modified Software: means the Software modified by at least one
Contribution.
Source Code: means all the Software's instructions and program lines to
which access is required so as to modify the Software.
Object Code: means the binary files originating from the compilation of
the Source Code.
Holder: means the holder(s) of the economic rights over the Initial
Software.
Licensee: means the Software user(s) having accepted the Agreement.
Contributor: means a Licensee having made at least one Contribution.
Licensor: means the Holder, or any other individual or legal entity, who
distributes the Software under the Agreement.
Contribution: means any or all modifications, corrections, translations,
adaptations and/or new functions integrated into the Software by any or
all Contributors, as well as any or all Internal Modules.
Module: means a set of sources files including their documentation that
enables supplementary functions or services in addition to those offered
by the Software.
External Module: means any or all Modules, not derived from the
Software, so that this Module and the Software run in separate address
spaces, with one calling the other when they are run.
Internal Module: means any or all Module, connected to the Software so
that they both execute in the same address space.
Parties: mean both the Licensee and the Licensor.
These expressions may be used both in singular and plural form.
Article 2 - PURPOSE
The purpose of the Agreement is the grant by the Licensor to the
Licensee of a non-exclusive, transferable and worldwide license for the
Software as set forth in Article 5 hereinafter for the whole term of the
protection granted by the rights over said Software.
Article 3 - ACCEPTANCE
3.1 The Licensee shall be deemed as having accepted the terms and
conditions of this Agreement upon the occurrence of the first of the
following events:
* (i) loading the Software by any or all means, notably, by
downloading from a remote server, or by loading from a physical
medium;
* (ii) the first time the Licensee exercises any of the rights
granted hereunder.
3.2 One copy of the Agreement, containing a notice relating to the
characteristics of the Software, to the limited warranty, and to the
fact that its use is restricted to experienced users has been provided
to the Licensee prior to its acceptance as set forth in Article 3.1
hereinabove, and the Licensee hereby acknowledges that it has read and
understood it.
Article 4 - EFFECTIVE DATE AND TERM
4.1 EFFECTIVE DATE
The Agreement shall become effective on the date when it is accepted by
the Licensee as set forth in Article 3.1.
4.2 TERM
The Agreement shall remain in force for the entire legal term of
protection of the economic rights over the Software.
Article 5 - SCOPE OF RIGHTS GRANTED
The Licensor hereby grants to the Licensee, who accepts, the following
rights over the Software for any or all use, and for the term of the
Agreement, on the basis of the terms and conditions set forth hereinafter.
Besides, if the Licensor owns or comes to own one or more patents
protecting all or part of the functions of the Software or of its
components, the Licensor undertakes not to enforce the rights granted by
these patents against successive Licensees using, exploiting or
modifying the Software. If these patents are transferred, the Licensor
undertakes to have the transferees subscribe to the obligations set
forth in this paragraph.
5.1 RIGHT OF USE
The Licensee is authorized to use the Software, without any limitation
as to its fields of application, with it being hereinafter specified
that this comprises:
1. permanent or temporary reproduction of all or part of the Software
by any or all means and in any or all form.
2. loading, displaying, running, or storing the Software on any or
all medium.
3. entitlement to observe, study or test its operation so as to
determine the ideas and principles behind any or all constituent
elements of said Software. This shall apply when the Licensee
carries out any or all loading, displaying, running, transmission
or storage operation as regards the Software, that it is entitled
to carry out hereunder.
5.2 ENTITLEMENT TO MAKE CONTRIBUTIONS
The right to make Contributions includes the right to translate, adapt,
arrange, or make any or all modifications to the Software, and the right
to reproduce the resulting software.
The Licensee is authorized to make any or all Contributions to the
Software provided that it includes an explicit notice that it is the
author of said Contribution and indicates the date of the creation thereof.
5.3 RIGHT OF DISTRIBUTION
In particular, the right of distribution includes the right to publish,
transmit and communicate the Software to the general public on any or
all medium, and by any or all means, and the right to market, either in
consideration of a fee, or free of charge, one or more copies of the
Software by any means.
The Licensee is further authorized to distribute copies of the modified
or unmodified Software to third parties according to the terms and
conditions set forth hereinafter.
5.3.1 DISTRIBUTION OF SOFTWARE WITHOUT MODIFICATION
The Licensee is authorized to distribute true copies of the Software in
Source Code or Object Code form, provided that said distribution
complies with all the provisions of the Agreement and is accompanied by:
1. a copy of the Agreement,
2. a notice relating to the limitation of both the Licensor's
warranty and liability as set forth in Articles 8 and 9,
and that, in the event that only the Object Code of the Software is
redistributed, the Licensee allows effective access to the full Source
Code of the Software at a minimum during the entire period of its
distribution of the Software, it being understood that the additional
cost of acquiring the Source Code shall not exceed the cost of
transferring the data.
5.3.2 DISTRIBUTION OF MODIFIED SOFTWARE
If the Licensee makes any Contribution to the Software, the resulting
Modified Software may be distributed under a license agreement other
than this Agreement subject to compliance with the provisions of Article
5.3.4.
5.3.3 DISTRIBUTION OF EXTERNAL MODULES
When the Licensee has developed an External Module, the terms and
conditions of this Agreement do not apply to said External Module, that
may be distributed under a separate license agreement.
5.3.4 CREDITS
Any Licensee who may distribute a Modified Software hereby expressly
agrees to:
1. indicate in the related documentation that it is based on the
Software licensed hereunder, and reproduce the intellectual
property notice for the Software,
2. ensure that written indications of the Software intended use,
intellectual property notice and license hereunder are included in
easily accessible format from the Modified Software interface,
3. mention, on a freely accessible website describing the Modified
Software, at least throughout the distribution term thereof, that
it is based on the Software licensed hereunder, and reproduce the
Software intellectual property notice,
4. where it is distributed to a third party that may distribute a
Modified Software without having to make its source code
available, make its best efforts to ensure that said third party
agrees to comply with the obligations set forth in this Article .
If the Software, whether or not modified, is distributed with an
External Module designed for use in connection with the Software, the
Licensee shall submit said External Module to the foregoing obligations.
5.3.5 COMPATIBILITY WITH THE CeCILL AND CeCILL-C LICENSES
Where a Modified Software contains a Contribution subject to the CeCILL
license, the provisions set forth in Article 5.3.4 shall be optional.
A Modified Software may be distributed under the CeCILL-C license. In
such a case the provisions set forth in Article 5.3.4 shall be optional.
Article 6 - INTELLECTUAL PROPERTY
6.1 OVER THE INITIAL SOFTWARE
The Holder owns the economic rights over the Initial Software. Any or
all use of the Initial Software is subject to compliance with the terms
and conditions under which the Holder has elected to distribute its work
and no one shall be entitled to modify the terms and conditions for the
distribution of said Initial Software.
The Holder undertakes that the Initial Software will remain ruled at
least by this Agreement, for the duration set forth in Article 4.2.
6.2 OVER THE CONTRIBUTIONS
The Licensee who develops a Contribution is the owner of the
intellectual property rights over this Contribution as defined by
applicable law.
6.3 OVER THE EXTERNAL MODULES
The Licensee who develops an External Module is the owner of the
intellectual property rights over this External Module as defined by
applicable law and is free to choose the type of agreement that shall
govern its distribution.
6.4 JOINT PROVISIONS
The Licensee expressly undertakes:
1. not to remove, or modify, in any manner, the intellectual property
notices attached to the Software;
2. to reproduce said notices, in an identical manner, in the copies
of the Software modified or not.
The Licensee undertakes not to directly or indirectly infringe the
intellectual property rights of the Holder and/or Contributors on the
Software and to take, where applicable, vis-à-vis its staff, any and all
measures required to ensure respect of said intellectual property rights
of the Holder and/or Contributors.
Article 7 - RELATED SERVICES
7.1 Under no circumstances shall the Agreement oblige the Licensor to
provide technical assistance or maintenance services for the Software.
However, the Licensor is entitled to offer this type of services. The
terms and conditions of such technical assistance, and/or such
maintenance, shall be set forth in a separate instrument. Only the
Licensor offering said maintenance and/or technical assistance services
shall incur liability therefor.
7.2 Similarly, any Licensor is entitled to offer to its licensees, under
its sole responsibility, a warranty, that shall only be binding upon
itself, for the redistribution of the Software and/or the Modified
Software, under terms and conditions that it is free to decide. Said
warranty, and the financial terms and conditions of its application,
shall be subject of a separate instrument executed between the Licensor
and the Licensee.
Article 8 - LIABILITY
8.1 Subject to the provisions of Article 8.2, the Licensee shall be
entitled to claim compensation for any direct loss it may have suffered
from the Software as a result of a fault on the part of the relevant
Licensor, subject to providing evidence thereof.
8.2 The Licensor's liability is limited to the commitments made under
this Agreement and shall not be incurred as a result of in particular:
(i) loss due the Licensee's total or partial failure to fulfill its
obligations, (ii) direct or consequential loss that is suffered by the
Licensee due to the use or performance of the Software, and (iii) more
generally, any consequential loss. In particular the Parties expressly
agree that any or all pecuniary or business loss (i.e. loss of data,
loss of profits, operating loss, loss of customers or orders,
opportunity cost, any disturbance to business activities) or any or all
legal proceedings instituted against the Licensee by a third party,
shall constitute consequential loss and shall not provide entitlement to
any or all compensation from the Licensor.
Article 9 - WARRANTY
9.1 The Licensee acknowledges that the scientific and technical
state-of-the-art when the Software was distributed did not enable all
possible uses to be tested and verified, nor for the presence of
possible defects to be detected. In this respect, the Licensee's
attention has been drawn to the risks associated with loading, using,
modifying and/or developing and reproducing the Software which are
reserved for experienced users.
The Licensee shall be responsible for verifying, by any or all means,
the suitability of the product for its requirements, its good working
order, and for ensuring that it shall not cause damage to either persons
or properties.
9.2 The Licensor hereby represents, in good faith, that it is entitled
to grant all the rights over the Software (including in particular the
rights set forth in Article 5).
9.3 The Licensee acknowledges that the Software is supplied "as is" by
the Licensor without any other express or tacit warranty, other than
that provided for in Article 9.2 and, in particular, without any warranty
as to its commercial value, its secured, safe, innovative or relevant
nature.
Specifically, the Licensor does not warrant that the Software is free
from any error, that it will operate without interruption, that it will
be compatible with the Licensee's own equipment and software
configuration, nor that it will meet the Licensee's requirements.
9.4 The Licensor does not either expressly or tacitly warrant that the
Software does not infringe any third party intellectual property right
relating to a patent, software or any other property right. Therefore,
the Licensor disclaims any and all liability towards the Licensee
arising out of any or all proceedings for infringement that may be
instituted in respect of the use, modification and redistribution of the
Software. Nevertheless, should such proceedings be instituted against
the Licensee, the Licensor shall provide it with technical and legal
assistance for its defense. Such technical and legal assistance shall be
decided on a case-by-case basis between the relevant Licensor and the
Licensee pursuant to a memorandum of understanding. The Licensor
disclaims any and all liability as regards the Licensee's use of the
name of the Software. No warranty is given as regards the existence of
prior rights over the name of the Software or as regards the existence
of a trademark.
Article 10 - TERMINATION
10.1 In the event of a breach by the Licensee of its obligations
hereunder, the Licensor may automatically terminate this Agreement
thirty (30) days after notice has been sent to the Licensee and has
remained ineffective.
10.2 A Licensee whose Agreement is terminated shall no longer be
authorized to use, modify or distribute the Software. However, any
licenses that it may have granted prior to termination of the Agreement
shall remain valid subject to their having been granted in compliance
with the terms and conditions hereof.
Article 11 - MISCELLANEOUS
11.1 EXCUSABLE EVENTS
Neither Party shall be liable for any or all delay, or failure to
perform the Agreement, that may be attributable to an event of force
majeure, an act of God or an outside cause, such as defective
functioning or interruptions of the electricity or telecommunications
networks, network paralysis following a virus attack, intervention by
government authorities, natural disasters, water damage, earthquakes,
fire, explosions, strikes and labor unrest, war, etc.
11.2 Any failure by either Party, on one or more occasions, to invoke
one or more of the provisions hereof, shall under no circumstances be
interpreted as being a waiver by the interested Party of its right to
invoke said provision(s) subsequently.
11.3 The Agreement cancels and replaces any or all previous agreements,
whether written or oral, between the Parties and having the same
purpose, and constitutes the entirety of the agreement between said
Parties concerning said purpose. No supplement or modification to the
terms and conditions hereof shall be effective as between the Parties
unless it is made in writing and signed by their duly authorized
representatives.
11.4 In the event that one or more of the provisions hereof were to
conflict with a current or future applicable act or legislative text,
said act or legislative text shall prevail, and the Parties shall make
the necessary amendments so as to comply with said act or legislative
text. All other provisions shall remain effective. Similarly, invalidity
of a provision of the Agreement, for any reason whatsoever, shall not
cause the Agreement as a whole to be invalid.
11.5 LANGUAGE
The Agreement is drafted in both French and English and both versions
are deemed authentic.
Article 12 - NEW VERSIONS OF THE AGREEMENT
12.1 Any person is authorized to duplicate and distribute copies of this
Agreement.
12.2 So as to ensure coherence, the wording of this Agreement is
protected and may only be modified by the authors of the License, who
reserve the right to periodically publish updates or new versions of the
Agreement, each with a separate number. These subsequent versions may
address new issues encountered by Free Software.
12.3 Any Software distributed under a given version of the Agreement may
only be subsequently distributed under the same version of the Agreement
or a subsequent version.
Article 13 - GOVERNING LAW AND JURISDICTION
13.1 The Agreement is governed by French law. The Parties agree to
endeavor to seek an amicable solution to any disagreements or disputes
that may arise during the performance of the Agreement.
13.2 Failing an amicable solution within two (2) months as from their
occurrence, and unless emergency proceedings are necessary, the
disagreements or disputes shall be referred to the Paris Courts having
jurisdiction, by the more diligent Party.
Version 1.0 dated 2006-09-05.

@@ -1,3 +1,41 @@
# KazV2
# kaz
The full set of KAZ services.
[Kaz](https://kaz.bzh/) is a CHATONS based in Morbihan. This repository provides a way to replicate it elsewhere. A few configuration items must be defined before initializing this simulator.
Our CHATONS can be simulated in a VirtualBox in order to develop and tune the various services (see [kaz-vagrant](https://git.kaz.bzh/KAZ/kaz-vagrant)).
## Prerequisites
In what follows, we assume you have a Linux machine (for example the latest release of [Debian](https://fr.wikipedia.org/wiki/Debian)) with a minimal set of packages.
```bash
DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
DEBIAN_FRONTEND=noninteractive apt-get install -y apg curl git unzip rsync net-tools libnss3-tools
DEBIAN_FRONTEND=noninteractive apt-get remove --purge -y exim4-base exim4-config exim4-daemon-light
```
## Installation
* Download the repository, or use the command:
```bash
git clone https://git.kaz.bzh/KAZ/kaz.git # to try it out
git clone git+ssh://git@git.kaz.bzh:2202/KAZ/kaz.git # to contribute
cd kaz/
```
* Customize your simulator with the following command (adjust the memory and CPUs used in the Vagrantfile if needed):
```bash
./bin/init.sh
```
To finish the setup and start all the Docker containers, run:
```bash
./bin/install.sh web jirafeau ethercalc etherpad postfix roundcube proxy framadate paheko dokuwiki > >(tee stdout.log) 2> >(tee stderr.log >&2)
```
* Once the creation command has been launched, be patient...
* Then be patient a little longer if you do not have a fiber connection...
## Usage

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
_applyTemplate_completions () {
declare -a OPTIONS
OPTIONS=("-help" "-timestamp")
COMPREPLY=()
local CUR OPTIONS_COUNT=0
CUR=${COMP_WORDS[COMP_CWORD]}
# count the options already used and remove them from the list of remaining possibilities
for ITEM in ${COMP_WORDS[@]:1}
do
if [[ " ${OPTIONS[*]} " =~ " ${ITEM} " ]] ; then
let "OPTIONS_COUNT++"
OPTIONS=(${OPTIONS[@]/${ITEM}})
else
break
fi
done
# if the cursor is within the options (or just after them)
((COMP_CWORD <= OPTIONS_COUNT+1)) && [[ "${CUR}" =~ ^- ]] && COMPREPLY=( $(compgen -W "${OPTIONS[*]}" -- "${CUR}" ) ) && return 0
# if the cursor is past the options, or the current word does not start with "-", complete with file names
((COMP_CWORD <= OPTIONS_COUNT+2)) && COMPREPLY=($(compgen -f -- "${CUR}")) && return 0
return 0
}
complete -F _applyTemplate_completions applyTemplate.sh

bin/.commonFunctions.sh Executable file
@@ -0,0 +1,382 @@
# common functions for KAZ
#TODO: all the functions below should be commented
#KI : françois
#KOI : a whole set of helpers for managing the kaz infrastructure (to be sourced by every script)
#KAN :
# updated 27/01/2024 by FAB: lookup of all the available kaz servers (via DNS)
# updated 15/04/2024 by FAB: fix of getPahekoOrgaList
# https://wiki.bash-hackers.org/scripting/terminalcodes
BOLD=''
RED=''
GREEN=''
YELLOW=''
BLUE=''
MAGENTA=''
CYAN=''
NC='' # No Color
NL='
'
########################################
setKazVars () {
# KAZ_ROOT must be set
if [ -z "${KAZ_ROOT}" ]; then
printKazError "\n\n *** KAZ_ROOT not defined! ***\n"
exit
fi
export KAZ_KEY_DIR="${KAZ_ROOT}/secret"
export KAZ_BIN_DIR="${KAZ_ROOT}/bin"
export KAZ_CONF_DIR="${KAZ_ROOT}/config"
export KAZ_CONF_PROXY_DIR="${KAZ_CONF_DIR}/proxy"
export KAZ_COMP_DIR="${KAZ_ROOT}/dockers"
export KAZ_STATE_DIR="${KAZ_ROOT}/state"
export KAZ_GIT_DIR="${KAZ_ROOT}/git"
export KAZ_DNLD_DIR="${KAZ_ROOT}/download"
export KAZ_DNLD_PAHEKO_DIR="${KAZ_DNLD_DIR}/paheko"
export APPLY_TMPL=${KAZ_BIN_DIR}/applyTemplate.sh
export DOCKERS_ENV="${KAZ_CONF_DIR}/dockers.env"
export DOCK_LIB="/var/lib/docker"
export DOCK_VOL="${DOCK_LIB}/volumes"
export DOCK_VOL_PAHEKO_ORGA="${DOCK_LIB}/volumes/paheko_assoUsers/_data/"
export NAS_VOL="/mnt/disk-nas1/docker/volumes/"
}
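# Typical usage from a script of this repository (the same pattern as in bin/applyTemplate.sh):
#   KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
#   . "${KAZ_ROOT}/bin/.commonFunctions.sh"
#   setKazVars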
########################################
printKazMsg () {
# $1 msg
echo -e "${CYAN}${BOLD}$1${NC}"
}
printKazError () {
# $1 msg
echo -e "${RED}${BOLD}$1${NC}"
}
########################################
checkContinue () {
local rep
while : ; do
read -p "Do you want to continue? [yes]" rep
case "${rep}" in
""|[yYoO]* )
break
;;
[Nn]* )
exit
;;
* )
echo "Please answer yes or no."
;;
esac
done
}
checkDockerRunning () {
# $1 docker name
# $2 service name
if ! [[ "$(docker ps -f "name=$1" | grep -w "$1")" ]]; then
printKazError "$2 not running... abort"
return 1
fi
return 0
}
########################################
testValidIp () {
# $1 ip
local ip=$1
local stat=1
if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
OIFS=$IFS
IFS='.'
ip=($ip)
IFS=$OIFS
[[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
stat=$?
fi
return $stat
}
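# usage example: testValidIp "192.168.1.10" && echo "valid"   # returns 0 only for a well-formed IPv4 address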
########################################
getValInFile () {
# $1 filename
# $2 varname
grep "^\s*$2\s*=" $1 2>/dev/null | head -1 | sed "s%^\s*$2\s*=\(.*\)$%\1%"
}
getList () {
# $1 filename
(cat "$1"|sort; echo) | sed -e "s/\(.*\)[ \t]*#.*$/\1/" -e "s/^[ \t]*\(.*\)$/\1/" -e "/^$/d"
}
getPahekoPluginList () {
ls "${KAZ_DNLD_PAHEKO_DIR}" | grep -v "paheko-"
}
getPahekoOrgaList () {
# ls "${DOCK_VOL_PAHEKO_ORGA}"
find ${DOCK_VOL_PAHEKO_ORGA} -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | sort
}
getAvailableComposes () {
ls "${KAZ_COMP_DIR}" | grep -v -- "^.*-orga$"
}
getAvailableOrgas () {
#KI : Fab
#KOI : lists all the orgas of a given server (defaults to the current server)
#KAN : 27/01/2024
# input:
SITE_DST="$1"
if [ -n "${SITE_DST}" ];then
ssh -p 2201 root@${SITE_DST}.${domain} "ls \"${KAZ_COMP_DIR}\" | grep -- \"^.*-orga$\""
else
ls "${KAZ_COMP_DIR}" | grep -- "^.*-orga$"
fi
}
getAvailableServices () {
local service
for service in paheko cloud collabora agora wiki wp; do
echo "${service}"
done
}
########################################
filterInList () {
# $* ref list filter
# stdin: candidates
local compose
while read compose ; do
if [[ " $* " =~ " ${compose} " ]]; then
echo ${compose}
fi
done | sort -u
}
filterNotInList () {
# $* ref list filter
# stdin: candidates
local compose
while read compose ; do
if [[ ! " $* " =~ " ${compose} " ]]; then
echo ${compose}
fi
done | sort -u
}
filterAvailableComposes () {
# $* candidates
local AVAILABLE_COMPOSES=$(getAvailableComposes;getAvailableOrgas)
if [ $# -eq 0 ] ; then
echo ${AVAILABLE_COMPOSES}
fi
local compose
for compose in $*
do
compose=${compose%/}
if [[ ! "${NL}${AVAILABLE_COMPOSES}${NL}" =~ "${NL}${compose}${NL}" ]]; then
local subst=""
for item in ${AVAILABLE_COMPOSES}; do
[[ "${item}" =~ "${compose}" ]] && echo ${item} && subst="${subst} ${item}"
done
if [ -z "${subst}" ] ; then
echo "${RED}${BOLD}Unknown compose: ${compose} not in "${AVAILABLE_COMPOSES}"${NC}" >&2
#exit 1
else
echo "${BLUE}${BOLD}substitute compose: ${compose} => "${subst}"${NC}" >&2
fi
else
echo "${compose}"
fi
done | sort -u
}
########################################
serviceOnInOrga () {
# $1 orga name
# $2 service name
# default value
local composeFile="${KAZ_COMP_DIR}/$1-orga/docker-compose.yml"
if [[ ! -f "${composeFile}" ]]
then
echo "$3"
else
grep -q "$2" "${composeFile}" 2>/dev/null && echo on || echo off
fi
}
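# usage example (hypothetical orga name "myasso"): serviceOnInOrga myasso cloud off
# prints "on" if "cloud" appears in dockers/myasso-orga/docker-compose.yml,
# "off" otherwise, and the default value ("off" here) when that compose file does not exist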
########################################
waitUrl () {
# $1 URL to wait for
# $2 timeout in seconds (optional)
starttime=$(date +%s)
if [[ $(curl --connect-timeout 2 -s -D - "$1" -o /dev/null 2>/dev/null | head -n1) != *[23]0[0-9]* ]]; then
printKazMsg "service not available ($1). Please wait..."
echo curl --connect-timeout 2 -s -D - "$1" -o /dev/null \| head -n1
while [[ $(curl --connect-timeout 2 -s -D - "$1" -o /dev/null 2>/dev/null | head -n1) != *[23]0[0-9]* ]]
do
sleep 5
if [ $# -gt 1 ]; then
actualtime=$(date +%s)
delta=$(($actualtime-$starttime))
[[ $2 -lt $delta ]] && return 1
fi
done
fi
return 0
}
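# usage example (hypothetical URL): waitUrl "https://cloud.${domain}" 300 || exit 1
# polls the URL every 5 s until it answers with a 20x/30x status, or gives up after 300 s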
########################################
waitContainerHealthy () {
# $1 ContainerName
# $2 timeout in seconds (optional)
healthy="false"
starttime=$(date +%s)
running="false"
[[ $(docker ps -f name="$1" | grep -w "$1") ]] && running="true"
[[ $running == "true" && $(docker inspect -f {{.State.Health.Status}} "$1") == "healthy" ]] && healthy="true"
if [[ ! $running == "true" || ! $healthy == "true" ]]; then
printKazMsg "Docker not healthy ($1). Please wait..."
while [[ ! $running == "true" || ! $healthy == "true" ]]
do
sleep 5
if [ $# -gt 1 ]; then
actualtime=$(date +%s)
delta=$(($actualtime-$starttime))
[[ $2 -lt $delta ]] && printKazMsg "Docker not healthy ($1)... abort..." && return 1
fi
[[ ! $running == "true" ]] && [[ $(docker ps -f name="$1" | grep -w "$1") ]] && running="true"
[[ $running == "true" && $(docker inspect -f {{.State.Health.Status}} "$1") == "healthy" ]] && healthy="true"
done
fi
return 0
}
########################################
waitContainerRunning () {
# $1 ContainerName
# $2 timeout in seconds (optional)
starttime=$(date +%s)
running="false"
[[ $(docker ps -f name="$1" | grep -w "$1") ]] && running="true"
if [[ ! $running == "true" ]]; then
printKazMsg "Docker not running ($1). Please wait..."
while [[ ! $running == "true" ]]
do
sleep 5
if [ $# -gt 1 ]; then
actualtime=$(date +%s)
delta=$(($actualtime-$starttime))
[[ $2 -lt $delta ]] && printKazMsg "Docker did not start ($1)... abort..." && return 1
fi
[[ ! $running == "true" ]] && [[ $(docker ps -f name="$1" | grep -w "$1") ]] && running="true"
done
fi
return 0
}
########################################
downloadFile () {
# $1 URL to download
# $2 new filename (optional)
if [ $# -lt 1 ] || [ $# -gt 2 ]; then
printKazError "downloadFile: bad arg number"
return
fi
URL=$1
if [ -z "$2" ]; then
FILENAME="$(basename $1)"
else
FILENAME="$2"
fi
if [ ! -f "${FILENAME}" ]; then
printKazMsg " - load ${URL}"
curl -L -o "${FILENAME}" "${URL}"
else
TMP="${FILENAME}.tmp"
rm -f "${TMP}"
curl -L -o "${TMP}" "${URL}"
if ! cmp -s "${TMP}" "${FILENAME}" 2>/dev/null; then
mv "${TMP}" "${FILENAME}"
else
rm -f "${TMP}"
fi
fi
}
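# usage example (hypothetical URL): downloadFile "https://example.org/paheko.zip" "${KAZ_DNLD_PAHEKO_DIR}/paheko.zip"
# if the destination already exists, it is replaced only when the remote content differs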
unzipInDir () {
# $1 zipfile
# $2 destDir
if [ $# -ne 2 ]; then
printKazError "unzipInDir: bad arg number"
return
fi
if ! [[ $1 == *.zip ]]; then
printKazError "unzipInDir: $1 is not a zip file"
return
fi
if ! [[ -d $2 ]]; then
printKazError "$2 is not destination dir"
return
fi
destName="$2/$(basename "${1%.zip}")"
if [[ -d "${destName}" ]]; then
printKazError "${destName} already exist"
return
fi
tmpDir=$2/tmp-$$
trap 'rm -rf "${tmpDir}"' EXIT
unzip "$1" -d "${tmpDir}"
srcDir=$(ls -1 "${tmpDir}")
case $(wc -l <<< $srcDir) in
0)
printKazError "empty zip file : $1"
rmdir "${tmpDir}"
return
;;
1)
mv "${tmpDir}/${srcDir}" "${destName}"
rmdir "${tmpDir}"
;;
*)
printKazError "zip file $1 is not a tree (${srcDir})"
return
;;
esac
}
########################################
get_Serveurs_Kaz () {
#KI : Fab
#KOI : lists all the kaz servers, in the form srv1;srv2;srv3;..., by querying the DNS
#KAN : 27/01/2024
liste=`dig -t TXT srv.kaz.bzh +short`
# clean up
liste=$(echo "$liste" | sed 's/\;/ /g')
liste=$(echo "$liste" | sed 's/\"//g')
# returns srv1 srv2 srv3 ...
echo ${liste}
}
########################################

bin/.container-completion.bash Executable file
@@ -0,0 +1,66 @@
#!/usr/bin/env bash
_container_completions () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
COMPREPLY=()
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
done
local arg_pos w i cmd= names=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)) ; do
w="${COMP_WORDS[i]}"
if [ -z "${cmd}" ] ; then
[[ "${w}" == -* ]] || cmd="${w}"
continue
fi
names="${names} ${w}"
done
case "$cur" in
-*)
COMPREPLY=( $(compgen -W "-h -n" -- "${cur}" ) ) ;;
*)
local cmd_available="status start stop save"
case "${arg_pos}" in
1)
# $1 of container.sh
COMPREPLY=($(compgen -W "${cmd_available}" -- "${cur}"))
;;
*)
# $2-* of container.sh
[[ " ${cmd_available} " =~ " ${cmd} " ]] || return 0
# select set of names
local names_set="available"
case "${cmd}" in
status)
names_set="available"
;;
start)
names_set="disable"
;;
stop)
names_set="enable"
;;
save)
names_set="validate"
;;
esac
local available_args=$("${KAZ_ROOT}/bin/kazList.sh" "compose" "${names_set}")
# remove previous selected target
local proposal item
for item in ${available_args} ; do
[[ " ${names} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=($(compgen -W "${proposal}" -- "${cur}"))
;;
esac
esac
return 0
}
complete -F _container_completions container.sh

bin/.dns-completion.bash Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env bash
_dns_completions () {
local cur find
COMPREPLY=()
cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
COMPREPLY=( $(compgen -W "-h -n -f" -- "${cur}" ) ) ;;
*)
find=""
for arg in ${COMP_WORDS[@]} ; do
[[ " list add del " =~ " ${arg} " ]] && find="arg"
done
[ -z "${find}" ] && COMPREPLY=($(compgen -W "init list add del" -- "${cur}")) ;;
esac
return 0
}
complete -F _dns_completions dns.sh

@@ -0,0 +1,79 @@
#!/usr/bin/env bash
_foreign_domain_completions () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
COMPREPLY=()
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
done
local arg_pos w i cmd= opt1= opt2=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)) ; do
w="${COMP_WORDS[i]}"
if [ -z "${cmd}" ] ; then
[[ "${w}" == -* ]] || cmd="${w}"
continue
fi
if [ -z "${opt1}" ] ; then
[[ "${w}" == -* ]] || opt1="${w}"
continue
fi
if [ -z "${opt2}" ] ; then
[[ "${w}" == -* ]] || opt2="${w}"
break
fi
done
case "$cur" in
-*)
COMPREPLY=( $(compgen -W "-h -n" -- "${cur}" ) ) ;;
*)
local cmd_available="list add del"
if [ "${arg_pos}" == 1 ]; then
# $1 of foreign-domain.sh .sh
COMPREPLY=($(compgen -W "${cmd_available}" -- "${cur}"))
else
. "${KAZ_CONF_DIR}/dockers.env"
case "${cmd}" in
"list")
;;
"add")
case "${arg_pos}" in
2)
declare -a availableOrga
availableOrga=($(sed -e "s/\(.*\)[ \t]*#.*$/\1/" -e "s/^[ \t]*\(.*\)-orga$/\1/" -e "/^$/d" "${KAZ_CONF_DIR}/container-orga.list"))
COMPREPLY=($(compgen -W "${availableOrga[*]}" -- "${cur}"))
;;
3)
local availableComposes=$(${KAZ_COMP_DIR}/${opt1}-orga/orga-gen.sh -l)
COMPREPLY=($(compgen -W "${availableComposes[*]}" -- "${cur}"))
;;
esac
;;
"del")
case "${arg_pos}" in
1)
;;
*)
local availableComposes=$(${KAZ_BIN_DIR}/kazList.sh service validate|sed -e "s/\bcollabora\b//" -e "s/ / /")
declare -a availableDomaine
availableDomaine=($(for compose in ${availableComposes[@]} ; do
sed -e "s/[ \t]*\([^#]*\)#.*/\1/g" -e "/^$/d" -e "s/.*server_name[ \t]\([^ ;]*\).*/\1/" "${KAZ_CONF_PROXY_DIR}/${compose}_kaz_name"
done))
COMPREPLY=($(compgen -W "${availableDomaine[*]}" -- "${cur}"))
;;
esac
;;
esac
fi
;;
esac
return 0
}
complete -F _foreign_domain_completions foreign-domain.sh

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
_gestContainers_completion () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
COMPREPLY=()
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
local proposal="-h --help -n --simu -q --quiet -m --main -M --nas --local -v --version -l --list -cloud -agora -wp -wiki -office -I --install -r -t -exec --optim -occ -u -i -a -U --upgrade -p --post -mmctl"
COMPREPLY=( $(compgen -W "${proposal}" -- "${cur}" ) )
;;
*)
# orga name
local available_orga=$("${KAZ_BIN_DIR}/kazList.sh" "service" "enable" 2>/dev/null)
COMPREPLY=($(compgen -W "${available_orga}" -- "${cur}"))
;;
esac
return 0
}
complete -F _gestContainers_completion gestContainers.sh

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
_kazDockerNet_completion () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
COMPREPLY=()
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
done
local arg_pos w i cmd= names=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)) ; do
w="${COMP_WORDS[i]}"
if [ -z "${cmd}" ] ; then
[[ "${w}" == -* ]] || cmd="${w}"
continue
fi
names="${names} ${w}"
done
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]}
case "$cur" in
-*)
COMPREPLY=( $(compgen -W "-h -n" -- "${cur}" ) )
;;
*)
local cmd_available="list add"
case "${cword}" in
1)
COMPREPLY=($(compgen -W "${cmd_available}" -- "${cur}"))
;;
*)
[[ "${cmd}" = "add" ]] || return 0
local available_args=$("${KAZ_BIN_DIR}/kazList.sh" "compose" "available" 2>/dev/null)
local used=$("${KAZ_BIN_DIR}/kazDockerNet.sh" "list" | grep "name:" | sed -e "s%\bname:\s*%%" -e "s%\bbridge\b\s*%%" -e "s%Net\b%%g")
local proposal item
for item in ${available_args} ; do
[[ " ${names} " =~ " ${item} " ]] || [[ " ${used} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=($(compgen -W "${proposal}" -- "${cur}"))
;;
esac
;;
esac
return 0
}
complete -F _kazDockerNet_completion kazDockerNet.sh

bin/.kazList-completion.bash Executable file
@@ -0,0 +1,83 @@
#!/usr/bin/env bash
_kazList_completions () {
#KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
COMPREPLY=()
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
done
local arg_pos w i cmd= opt= names=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)); do
w="${COMP_WORDS[i]}"
if [ -z "${cmd}" ]; then
[[ "${w}" == -* ]] || cmd="${w}"
continue
fi
if [ -z "${opt}" ]; then
[[ "${w}" == -* ]] || opt="${w}"
continue
fi
names="${names} ${w}"
done
#(echo "A cword:${cword} / arg_pos:${arg_pos} / card:${card} / cur:${cur} / cmd:${cmd} / opt:${opt} / names:${names} " >> /dev/pts/1)
case "${cur}" in
-*)
COMPREPLY=($(compgen -W "-h --help" -- "${cur}"))
;;
*)
local cmd_available="compose service"
local opt_available="available validate enable disable status"
case "${arg_pos}" in
1)
# $1 of kazList.sh
COMPREPLY=($(compgen -W "${cmd_available}" -- "${cur}"))
;;
2)
# $2 of kazList.sh
COMPREPLY=($(compgen -W "${opt_available}" -- "${cur}"))
;;
*)
# $3-* of kazList.sh
[[ " ${cmd_available} " =~ " ${cmd} " ]] || return 0
# select set of names
local names_set="${opt}"
local available_args
case "${cmd}" in
service)
case "${names_set}" in
available|validate)
return 0
;;
*)
available_args=$("${COMP_WORDS[0]}" "compose" "enable" "orga" 2>/dev/null)
;;
esac
;;
compose)
case "${names_set}" in
validate|enable|disable)
;;
*)
names_set="available"
;;
esac
available_args=$("${COMP_WORDS[0]}" "${cmd}" "${names_set}")
;;
esac
# remove previous selected target
local proposal item
for item in ${available_args} ; do
[[ " ${names} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=($(compgen -W "${proposal}" -- "${cur}"))
;;
esac
esac
return 0
}
complete -F _kazList_completions kazList.sh

bin/.mvOrga2Nas-completion.bash Executable file
@@ -0,0 +1,39 @@
#!/usr/bin/env bash
_mv_orga_nas_completion () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/..; pwd)
COMPREPLY=()
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
done
local arg_pos w i names=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)) ; do
w="${COMP_WORDS[i]}"
if [[ "${w}" == -* ]]; then
continue
fi
names="${names} ${w}"
done
local KAZ_LIST="${KAZ_BIN_DIR}/kazList.sh"
case "$cur" in
-*)
local proposal="-h -n"
COMPREPLY=( $(compgen -W "${proposal}" -- "${cur}" ) )
;;
*)
local available_orga=$("${KAZ_LIST}" "compose" "enable" "orga" 2>/dev/null | sed "s/-orga\b//g")
local proposal= item
for item in ${available_orga} ; do
[[ " ${names} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=($(compgen -W "${proposal}" -- "${cur}"))
;;
esac
return 0
}
complete -F _mv_orga_nas_completion mvOrga2Nas.sh

bin/.orga-gen-completion.bash Executable file
@@ -0,0 +1,63 @@
#!/usr/bin/env bash
_orga_gen_completion () {
KAZ_ROOT=$(cd "$(dirname ${COMP_WORDS[0]})"/../..; pwd)
ORGA_DIR=$(cd "$(dirname ${COMP_WORDS[0]})"; basename $(pwd))
COMPREPLY=()
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
local cword=${COMP_CWORD} cur=${COMP_WORDS[COMP_CWORD]} card=${#COMP_WORDS[@]} i w skip=0
for ((i=1 ; i<cword; i++)) ; do
w="${COMP_WORDS[i]}"
[[ "${w}" == -* ]] && ((skip++))
[[ "${w}" == +* ]] && ((skip++))
done
local arg_pos w i addOpt= rmOpt= names=
((arg_pos = cword - skip))
for ((i=1 ; i<card; i++)) ; do
w="${COMP_WORDS[i]}"
if [[ "${w}" == -* ]]; then
rmOpt="${rmOpt} ${w}"
continue
fi
if [[ "${w}" == '+'* ]]; then
addOpt="${addOpt} ${w}"
continue
fi
names="${names} ${w}"
done
local KAZ_LIST="${KAZ_BIN_DIR}/kazList.sh"
case "$cur" in
-*)
local available_services item proposal="-h -l" listOpt="available"
[ -n "${names}" ] && listOpt="enable ${names}"
[[ "${ORGA_DIR}" = "orgaTmpl" ]] || listOpt="enable ${ORGA_DIR%-orga}"
available_services=$("${KAZ_LIST}" service ${listOpt} 2>/dev/null | tr ' ' '\n' | sed "s/\(..*\)/-\1/")
for item in ${available_services} ; do
[[ " ${rmOpt} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=( $(compgen -W "${proposal}" -- "${cur}" ) )
;;
'+'*)
local available_services item proposal= listOpt="available"
[ -n "${names}" ] && listOpt="disable ${names}"
[[ "${ORGA_DIR}" = "orgaTmpl" ]] || listOpt="disable ${ORGA_DIR%-orga}"
available_services=$("${KAZ_LIST}" service ${listOpt} 2>/dev/null | tr ' ' '\n' | sed "s/\(..*\)/+\1/")
for item in ${available_services} ; do
[[ " ${addOpt} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=( $(compgen -W "${proposal}" -- "${cur}" ) )
;;
*)
[[ "${ORGA_DIR}" = "orgaTmpl" ]] || return 0;
local available_orga=$("${KAZ_LIST}" "compose" "enable" "orga" 2>/dev/null | sed "s/-orga\b//g")
local proposal= item
for item in ${available_orga} ; do
[[ " ${names} " =~ " ${item} " ]] || proposal="${proposal} ${item}"
done
COMPREPLY=($(compgen -W "${proposal}" -- "${cur}"))
;;
esac
return 0
}
complete -F _orga_gen_completion orga-gen.sh

bin/.updateLook-completion.bash Executable file
@@ -0,0 +1,11 @@
#!/usr/bin/env bash
_update_look_nas_completion () {
COMPREPLY=()
local cur=${COMP_WORDS[COMP_CWORD]}
local THEMES=$(cd "$(dirname ${COMP_WORDS[0]})"/look ; ls -F -- '.' | grep '/$' | sed 's%/%%' | tr '\n' ' ' | sed 's% $%%')
COMPREPLY=($(compgen -W "${THEMES}" -- "${cur}"))
return 0
}
complete -F _update_look_nas_completion updateLook.sh

bin/applyTemplate.sh Executable file
@@ -0,0 +1,102 @@
#!/bin/bash
# Updates the ${CONF} configuration from the ${TMPL} template.
# Variables substituted:
# - __DOMAIN__
# Blocks can be included or hidden.
# The start of a block is marked by a line containing {{XXX
# The end of a block is marked by a line containing }}
# Whether the block is kept depends on XXX:
# XXX = on => always kept
# XXX = off => always removed
# XXX = compose => kept if the environment variable proxy_compose is set to on
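# A minimal illustration (hypothetical template content, not part of this repository):
#   template line:  server_name cloud.__DOMAIN__;   -> __DOMAIN__ is replaced by ${domain}
#   template block:
#     {{cloud
#     include cloud.conf;
#     }}
#   the block is kept in the output only when proxy_cloud=on is set in ${DOCKERS_ENV}
#   invoked as: applyTemplate.sh [-timestamp] template dst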
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
usage () {
echo $(basename "$0") " [-h] [-help] [-timestamp] template dst"
echo " -h"
echo " -help Display this help."
echo " -timestamp produce timestamp comment."
}
TIMESTAMP=""
case "$1" in
'-h' | '-help' )
usage
shift
exit;;
'-time' | '-timestamp' )
TIMESTAMP=YES
shift;;
esac
# no more export in .env
PROXY_VARS=$(set | grep "proxy_.*=")
for var in ${PROXY_VARS}
do
export ${var}
done
(
# $1 = template
# $2 = target
if [ "${TIMESTAMP}" == "YES" ]; then
echo "# Generated by $(pwd)$(basename $0)"
echo "# à partir du modèle $1"
echo "#" $(date "+%x %X")
echo
fi
sed \
-e "/^[ \t]*$/d"\
-e "/^[ ]*#.*$/d"\
-e "s|__CACHET_HOST__|${cachetHost}|g"\
-e "s|__CALC_HOST__|${calcHost}|g"\
-e "s|__CLOUD_HOST__|${cloudHost}|g"\
-e "s|__DATE_HOST__|${dateHost}|g"\
-e "s|__DOKUWIKI_HOST__|${dokuwikiHost}|g"\
-e "s|__DOMAIN__|${domain}|g"\
-e "s|__FILE_HOST__|${fileHost}|g"\
-e "s|__PAHEKO_API_PASSWORD__|${paheko_API_PASSWORD}|g"\
-e "s|__PAHEKO_API_USER__|${paheko_API_USER}|g"\
-e "s|__PAHEKO_HOST__|${pahekoHost}|g"\
-e "s|__GIT_HOST__|${gitHost}|g"\
-e "s|__GRAV_HOST__|${gravHost}|g"\
-e "s|__HTTP_PROTO__|${httpProto}|g"\
-e "s|__LDAP_HOST__|${ldapHost}|g"\
-e "s|__LDAPUI_HOST__|${ldapUIHost}|g"\
-e "s|__MATTER_HOST__|${matterHost}|g"\
-e "s|__OFFICE_HOST__|${officeHost}|g"\
-e "s|__PAD_HOST__|${padHost}|g"\
-e "s|__QUOTAS_HOST__|${quotasHost}|g"\
-e "s|__SMTP_HOST__|${smtpHost}|g"\
-e "s|__SYMPADB__|${sympaDBName}|g"\
-e "s|__SYMPA_HOST__|${sympaHost}|g"\
-e "s|__SYMPA_MYSQL_DATABASE__|${sympa_MYSQL_DATABASE}|g"\
-e "s|__SYMPA_MYSQL_PASSWORD__|${sympa_MYSQL_PASSWORD}|g"\
-e "s|__SYMPA_MYSQL_USER__|${sympa_MYSQL_USER}|g"\
-e "s|__VIGILO_HOST__|${vigiloHost}|g"\
-e "s|__WEBMAIL_HOST__|${webmailHost}|g"\
-e "s|__CASTOPOD_HOST__|${castopodHost}|g"\
-e "s|__IMAPSYNC_HOST__|${imapsyncHost}|g"\
-e "s|__YAKFORMS_HOST__|${yakformsHost}|g"\
-e "s|__WORDPRESS_HOST__|${wordpressHost}|g"\
-e "s|__MOBILIZON_HOST__|${mobilizonHost}|g"\
-e "s|__API_HOST__|${apiHost}|g"\
-e "s|__VAULTWARDEN_HOST__|${vaultwardenHost}|g"\
-e "s|__DOMAIN_SYMPA__|${domain_sympa}|g"\
$1 | awk '
BEGIN {cp=1}
/}}/ {cp=1 ; next};
/{{on/ {cp=1; next};
/{{off/ {cp=0; next};
match($0, /{{[a-zA-Z0-9_]+/) {cp=(ENVIRON["proxy_" substr($0,RSTART+2,RLENGTH)] == "on"); next};
{if (cp) print $0};'
) > $2

bin/checkEnvFiles.sh Executable file
@@ -0,0 +1,315 @@
#!/bin/bash
export KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
RUN_PASS_DIR="secret"
TMPL_PASS_DIR="secret.tmpl"
RUN_PASS_FILE="${RUN_PASS_DIR}/SetAllPass.sh"
TMPL_PASS_FILE="${TMPL_PASS_DIR}/SetAllPass.sh"
NEED_GEN=
########################################
usage () {
echo "Usage: $0 [-n] [-h]"
echo " -h help"
exit 1
}
case "$1" in
'-h' | '-help' )
usage
;;
esac
[ "$#" -eq 0 ] || usage
########################################
# check system
for prg in kompare; do
if ! type "${prg}" > /dev/null; then
printKazError "$0 need ${prg}"
echo "please run \"apt-get install ${prg}\""
exit
fi
done
cd "${KAZ_ROOT}"
########################################
# get lvalues in script
getVars () {
# $1 : filename
grep "^[^#]*=" $1 | sed 's/\([^=]*\).*/\1/' | sort -u
}
# get lvalues in script
getSettedVars () {
# $1 : filename
grep "^[^#]*=..*" $1 | grep -v '^[^#]*=".*--clean_val--.*"' | grep -v '^[^#]*="${' | sort -u
}
getVarFormVal () {
# $1 searched value
# $2 filename
grep "^[^#]*=$1" $2 | sed 's/\s*\([^=]*\).*/\1/'
}
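# illustrative examples (values are hypothetical):
#   getVars "${RUN_PASS_FILE}"                # every variable name assigned in secret/SetAllPass.sh
#   getVarFormVal "paheko" "${DOCKERS_ENV}"   # name of the variable whose value is "paheko" in config/dockers.env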
########################################
# synchronized SetAllPass.sh (find missing lvalues)
updatePassFile () {
# $1 : ref filename
# $2 : target filename
REF_FILE="$1"
TARGET_FILE="$2"
NEED_UPDATE=
while : ; do
declare -a listRef listTarget missing
listRef=($(getVars "${REF_FILE}"))
listTarget=($(getVars "${TARGET_FILE}"))
missing=($(comm -23 <(printf "%s\n" ${listRef[@]}) <(printf "%s\n" ${listTarget[@]})))
if [ -n "${missing}" ]; then
echo "missing vars in ${YELLOW}${BOLD}${TARGET_FILE}${NC}:${RED}${BOLD}" ${missing[@]} "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${REF_FILE}" "${TARGET_FILE}"
NEED_UPDATE=true
break
;;
[Nn]*)
break
;;
esac
else
break
fi
done
}
updatePassFile "${TMPL_PASS_FILE}" "${RUN_PASS_FILE}"
[ -n "${NEED_UPDATE}" ] && NEED_GEN=true
updatePassFile "${RUN_PASS_FILE}" "${TMPL_PASS_FILE}"
########################################
# check empty pass in TMPL_PASS_FILE
declare -a settedVars
settedVars=($(getSettedVars "${TMPL_PASS_FILE}"))
if [ -n "${settedVars}" ]; then
echo "unclear password in ${YELLOW}${BOLD}${TMPL_PASS_FILE}${NC}:${BLUE}${BOLD}"
for var in ${settedVars[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to clear them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${TMPL_PASS_FILE}"
;;
esac
fi
########################################
# check new files env-*
createMissingEnv () {
# $1 : ref dir
# $2 : target dir
REF_DIR="$1"
TARGET_DIR="$2"
NEED_UPDATE=
declare -a listRef listTarget missing
listRef=($(cd "${REF_DIR}"; ls -1 env-* | grep -v '~$'))
listTarget=($(cd "${TARGET_DIR}"; ls -1 env-* | grep -v '~$'))
missing=($(comm -23 <(printf "%s\n" ${listRef[@]}) <(printf "%s\n" ${listTarget[@]})))
for envFile in ${missing[@]}; do
read -p "Do you want to create ${GREEN}${BOLD}${TARGET_DIR}/${envFile}${NC}? [y/n]: " yn
case $yn in
""|[Yy]*)
cp "${REF_DIR}/${envFile}" "${TARGET_DIR}/${envFile}"
NEED_UPDATE=true
;;
esac
done
}
createMissingEnv "${RUN_PASS_DIR}" "${TMPL_PASS_DIR}"
[ -n "${NEED_UPDATE}" ] && NEED_GEN=true
createMissingEnv "${TMPL_PASS_DIR}" "${RUN_PASS_DIR}"
[ -n "${NEED_UPDATE}" ] && NEED_GEN=true
########################################
# check missing values in env-* between RUN and TMPL
declare -a listTmpl listRun listCommonFiles
listTmplFiles=($(cd "${TMPL_PASS_DIR}"; ls -1 env-* | grep -v '~$'))
listRunFiles=($(cd "${RUN_PASS_DIR}"; ls -1 env-* | grep -v '~$'))
listCommonFiles=($(comm -3 <(printf "%s\n" ${listTmplFiles[@]}) <(printf "%s\n" ${listRunFiles[@]})))
for envFile in ${listCommonFiles[@]}; do
while : ; do
TMPL_FILE="${TMPL_PASS_DIR}/${envFile}"
RUN_FILE="${RUN_PASS_DIR}/${envFile}"
declare -a listRef list2Target missingInRun missingInTmpl
listTmplVars=($(getVars "${TMPL_FILE}"))
listRunVars=($(getVars "${RUN_FILE}"))
missingInTmpl=($(comm -23 <(printf "%s\n" ${listTmplVars[@]}) <(printf "%s\n" ${listRunVars[@]})))
missingInRun=($(comm -13 <(printf "%s\n" ${listTmplVars[@]}) <(printf "%s\n" ${listRunVars[@]})))
if [ -n "${missingInTmpl}" ] || [ -n "${missingInRun}" ]; then
[ -n "${missingInTmpl}" ] &&
echo "missing vars in ${YELLOW}${BOLD}${TMPL_FILE}${NC}:${RED}${BOLD}" ${missingInTmpl[@]} "${NC}"
[ -n "${missingInRun}" ] &&
echo "missing vars in ${YELLOW}${BOLD}${RUN_FILE}${NC}:${RED}${BOLD}" ${missingInRun[@]} "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${TMPL_FILE}" "${RUN_FILE}"
[ -n "${missingInTmpl}" ] && NEED_GEN=true
break
;;
[Nn]*)
break
;;
esac
else
break
fi
done
done
########################################
# check empty pass in env-*
for envFile in $(ls -1 "${TMPL_PASS_DIR}/"env-* | grep -v '~$'); do
settedVars=($(getSettedVars "${envFile}"))
if [ -n "${settedVars}" ]; then
echo "unclear password in ${GREEN}${BOLD}${envFile}${NC}:${BLUE}${BOLD}"
for var in ${settedVars[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to clear them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${envFile}"
;;
esac
fi
done
########################################
# check extention in dockers.env
declare -a missing
missing=($(for DIR in "${RUN_PASS_DIR}" "${TMPL_PASS_DIR}"; do
for envFile in $(ls -1 "${DIR}/"env-* | grep -v '~$'); do
val="${envFile#*env-}"
varName=$(getVarFormVal "${val}" "${DOCKERS_ENV}")
if [ -z "${varName}" ]; then
echo "${val}"
fi
done
done | sort -u))
if [ -n "${missing}" ]; then
echo "missing def in ${GREEN}${BOLD}${DOCKERS_ENV}${NC}:${BLUE}${BOLD}"
for var in ${missing[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${DOCKERS_ENV}"
;;
esac
fi
########################################
# check env-* in updateDockerPassword.sh
missing=($(for DIR in "${RUN_PASS_DIR}" "${TMPL_PASS_DIR}"; do
for envFile in $(ls -1 "${DIR}/"env-* | grep -v '~$'); do
val="${envFile#*env-}"
varName=$(getVarFormVal "${val}" "${DOCKERS_ENV}")
[ -z "${varName}" ] && continue
prefixe=$(grep "^\s*updateEnv.*${varName}" "${KAZ_BIN_DIR}/updateDockerPassword.sh" |
sed 's/\s*updateEnv[^"]*"\([^"]*\)".*/\1/' | sort -u)
if [ -z "${prefixe}" ]; then
echo "${envFile#*/}_(\${KAZ_KEY_DIR}/env-\${"${varName}"})"
fi
done
done | sort -u))
if [ -n "${missing}" ]; then
echo "missing update in ${GREEN}${BOLD}${KAZ_BIN_DIR}/updateDockerPassword.sh${NC}:${BLUE}${BOLD}"
for var in ${missing[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${KAZ_BIN_DIR}/updateDockerPassword.sh"
;;
esac
fi
########################################
# synchronized SetAllPass.sh and env-*
updateEnvFiles () {
# $1 secret dir
DIR=$1
listRef=($(getVars "${DIR}/SetAllPass.sh"))
missing=($(for envFile in $(ls -1 "${DIR}/"env-* | grep -v '~$'); do
val="${envFile#*env-}"
varName=$(getVarFormVal "${val}" "${DOCKERS_ENV}")
[ -z "${varName}" ] && continue
prefixe=$(grep "^\s*updateEnv.*${varName}" "${KAZ_BIN_DIR}/updateDockerPassword.sh" |
sed 's/\s*updateEnv[^"]*"\([^"]*\)".*/\1/' | sort -u)
[ -z "${prefixe}" ] && continue
listVarsInEnv=($(getVars "${envFile}"))
for var in ${listVarsInEnv[@]}; do
[[ ! " ${listRef[@]} " =~ " ${prefixe}_${var} " ]] && echo "${prefixe}_${var}"
done
# XXX must exist in SetAllPass.sh with the prefix
done))
if [ -n "${missing}" ]; then
echo "missing update in ${GREEN}${BOLD}${DIR}/SetAllPass.sh${NC}:${BLUE}${BOLD}"
for var in ${missing[@]}; do
echo -e "\t${var}"
done
echo "${NC}"
read -p "Do you want to add them? [y/n]: " yn
case $yn in
""|[Yy]*)
emacs "${DIR}/SetAllPass.sh"
;;
esac
fi
}
updateEnvFiles "${RUN_PASS_DIR}"
updateEnvFiles "${TMPL_PASS_DIR}"
# XXX look for the variables that are not used in the SetAllPass.sh files
if [ -n "${NEED_GEN}" ]; then
while : ; do
read -p "Do you want to generate blank values? [y/n]: " yn
case $yn in
""|[Yy]*)
"${KAZ_BIN_DIR}/secretGen.sh"
break
;;
[Nn]*)
break
;;
esac
done
fi
# XXX config/dockers.tmpl.env
# XXX ! check the init for dockers.env

bin/checkEnvPassword.sh Executable file
@@ -0,0 +1,11 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
for filename in "${KAZ_KEY_DIR}/"env-*Serv "${KAZ_KEY_DIR}/"env-*DB; do
if grep -q "^[^#=]*=\s*$" "${filename}" 2>/dev/null; then
echo "${filename}"
fi
done

bin/configKaz.sh Executable file
@@ -0,0 +1,24 @@
#!/bin/sh -e
. /usr/share/debconf/confmodule
db_version 2.0
if [ "$1" = "fset" ]; then
db_fset kaz/mode seen false
db_fset kaz/domain seen false
db_go
fi
if [ "$1" = "reset" ]; then
db_reset kaz/mode
db_reset kaz/domain
db_go
fi
#db_set kaz/domain test
db_title "a b c"
db_input critical kaz/mode
db_input critical kaz/domain
db_go

bin/configKaz.sh.templates Executable file
@@ -0,0 +1,11 @@
Template: kaz/mode
Type: select
Choices: prod, dev, local
Default: local
Description: Mode

Template: kaz/domain
Type: string
Description: domain name
Default: kaz.bzh

bin/container.sh Executable file
@@ -0,0 +1,319 @@
#!/bin/bash
# If postfix is absent, you must first run:
# docker network create postfix_mailNet
# starts/stops a compose
# backs up a compose's database
# updates the reverse proxy configuration parameters
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_BIN_DIR}"
PATH_SAUVE="/home/sauve/"
export SIMU=""
declare -a availableComposesNoNeedMail availableMailComposes availableComposesNeedMail availableProxyComposes availableOrga
availableComposesNoNeedMail=($(getList "${KAZ_CONF_DIR}/container-withoutMail.list"))
availableMailComposes=($(getList "${KAZ_CONF_DIR}/container-mail.list"))
availableComposesNeedMail=($(getList "${KAZ_CONF_DIR}/container-withMail.list"))
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
availableComposesNeedMail+=( "${availableOrga[@]}" )
knownedComposes+=( ${availableMailComposes[@]} )
knownedComposes+=( ${availableProxyComposes[@]} )
knownedComposes+=( ${availableComposesNoNeedMail[@]} )
knownedComposes+=( ${availableComposesNeedMail[@]} )
usage () {
echo "Usage: $0 [-n] {status|start|stop|save} [compose]..."
echo " -n : simulation"
echo " status : docker-compose status (default all compose available)"
echo " start : start composes (default all compose validate)"
echo " stop : stop composes (default all compose enable)"
echo " save : save all known database"
echo " [compose] : in ${knownedComposes[@]}"
exit 1
}
doCompose () {
# $1 dans ("up -d" "down")
# $2 nom du répertoire du compose
echo "compose: $1 $2"
${SIMU} cd "${KAZ_COMP_DIR}/$2"
if [ ! -h .env ] ; then
echo "create .env in $2"
${SIMU} ln -fs ../../config/dockers.env .env
fi
${SIMU} docker-compose $1
if [ "$2" = "cachet" ] && [ "$1" != "down" ]; then
NEW_KEY=$(cd "${KAZ_COMP_DIR}/$2" ; docker-compose logs | grep APP_KEY=base64: | sed "s/^.*'APP_KEY=\(base64:[^']*\)'.*$/\1/" | tail -1)
if [ -n "${NEW_KEY}" ]; then
printKazMsg "cachet key change"
# change key
${SIMU} sed -i \
-e 's%^\(\s*cachet_APP_KEY=\).*$%\1"'"${NEW_KEY}"'"%' \
"${KAZ_KEY_DIR}/SetAllPass.sh"
${SIMU} "${KAZ_BIN_DIR}/secretGen.sh"
# restart
${SIMU} docker-compose $1
fi
fi
}
doComposes () {
# $1 dans ("up -d" "down")
# $2+ nom des répertoires des composes
cmd=$1
shift
for compose in $@ ; do
doCompose "${cmd}" ${compose}
done
}
updateProxy () {
# $1 dans ("on" "off")
# $2 nom des répertoires des composes
cmd=$1
shift
echo "update proxy ${cmd}: $@"
date=$(date "+%x %X")
for compose in $@ ; do
composeFlag=${compose//-/_}
entry="proxy_${composeFlag}="
newline="${entry}${cmd} # update by $(basename $0) at ${date}"
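# e.g. writes or updates a line such as: proxy_cloud=on # update by container.sh at 06/03/2024 18:43:35
# (compose name and date above are illustrative)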
if ! grep -q "proxy_${composeFlag}=" "${DOCKERS_ENV}" 2> /dev/null ; then
if [[ -n "${SIMU}" ]] ; then
echo "${newline} >> ${DOCKERS_ENV}"
else
echo "${newline}" >> "${DOCKERS_ENV}"
fi
else
${SIMU} sed -i \
-e "s|${entry}.*|${newline}|g" \
"${DOCKERS_ENV}"
fi
done
for item in "${availableProxyComposes[@]}"; do
${SIMU} ${KAZ_COMP_DIR}/${item}/proxy-gen.sh
done
}
saveDB () {
#attention, soucis avec l'option "-ti" qui ne semble pas rendre la main avec docker exec
containerName=$1
userName=$2
userPass=$3
dbName=$4
backName=$5
if [[ -n "${SIMU}" ]] ; then
${SIMU} "docker exec ${containerName} mysqldump --user=${userName} --password=${userPass} ${dbName} | gzip > $PATH_SAUVE${backName}.sql.gz"
else
docker exec ${containerName} mysqldump --user=${userName} --password=${userPass} ${dbName} | gzip > $PATH_SAUVE${backName}.sql.gz
fi
}
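# Example with illustrative values: saveDB nextcloudDB user pass nextcloud cloud
# would dump the "nextcloud" database to /home/sauve/cloud.sql.gz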
declare -a enableComposesNoNeedMail enableMailComposes enableComposesNeedMail enableProxyComposes
enableComposesNoNeedMail=()
enableMailComposes=()
enableComposesNeedMail=()
enableProxyComposes=()
startComposes () {
./kazDockerNet.sh add ${enableComposesNoNeedMail[@]} ${enableProxyComposes[@]} ${enableMailComposes[@]} ${enableComposesNeedMail[@]}
[ ${#enableComposesNeedMail[@]} -ne 0 ] && [[ ! "${enableMailComposes[@]}" =~ "postfix" ]] && ./kazDockerNet.sh add postfix
[[ "${enableComposesNeedMail[@]}" =~ "paheko" ]] && ${SIMU} ${KAZ_COMP_DIR}/paheko/paheko-gen.sh
doComposes "up -d" ${enableComposesNoNeedMail[@]}
doComposes "up -d" ${enableMailComposes[@]}
doComposes "up -d" ${enableComposesNeedMail[@]}
updateProxy "on" ${enableComposesNoNeedMail[@]} ${enableComposesNeedMail[@]}
doComposes "up -d" ${enableProxyComposes[@]}
for item in "${enableProxyComposes[@]}"; do
${SIMU} ${KAZ_COMP_DIR}/${item}/reload.sh
done
if grep -q "^.s*proxy_web.s*=.s*on" "${DOCKERS_ENV}" 2> /dev/null ; then
${SIMU} ${KAZ_COMP_DIR}/web/web-gen.sh
fi
}
stopComposes () {
updateProxy "off" ${enableComposesNoNeedMail[@]} ${enableComposesNeedMail[@]}
doComposes "down" ${enableProxyComposes[@]}
doComposes "down" ${enableComposesNeedMail[@]}
doComposes "down" ${enableMailComposes[@]}
doComposes "down" ${enableComposesNoNeedMail[@]}
if grep -q "^.s*proxy_web.s*=.s*on" "${DOCKERS_ENV}" 2> /dev/null ; then
${SIMU} ${KAZ_COMP_DIR}/web/web-gen.sh
fi
}
statusComposes () {
${KAZ_ROOT}/bin/kazList.sh compose status ${enableMailComposes[@]} ${enableProxyComposes[@]} ${enableComposesNoNeedMail[@]} ${enableComposesNeedMail[@]}
}
saveComposes () {
. "${DOCKERS_ENV}"
. "${KAZ_ROOT}/secret/SetAllPass.sh"
savedComposes+=( ${enableMailComposes[@]} )
savedComposes+=( ${enableProxyComposes[@]} )
savedComposes+=( ${enableComposesNoNeedMail[@]} )
savedComposes+=( ${enableComposesNeedMail[@]} )
for compose in ${savedComposes[@]}
do
case "${compose}" in
jirafeau)
# rien à faire (fichiers)
;;
ethercalc)
#inutile car le backup de /var/lib/docker/volumes/ethercalc_calcDB/_data/dump.rdb est suffisant
;;
#grav)
# ???
#;;
#postfix)
sympa)
echo "save sympa"
saveDB ${sympaDBName} "${sympa_MYSQL_USER}" "${sympa_MYSQL_PASSWORD}" "${sympa_MYSQL_DATABASE}" sympa
;;
web)
# rien à faire (fichiers)
;;
etherpad)
echo "save pad"
saveDB ${etherpadDBName} "${etherpad_MYSQL_USER}" "${etherpad_MYSQL_PASSWORD}" "${etherpad_MYSQL_DATABASE}" etherpad
;;
framadate)
echo "save date"
saveDB ${framadateDBName} "${framadate_MYSQL_USER}" "${framadate_MYSQL_PASSWORD}" "${framadate_MYSQL_DATABASE}" framadate
;;
cloud)
echo "save cloud"
saveDB ${nextcloudDBName} "${nextcloud_MYSQL_USER}" "${nextcloud_MYSQL_PASSWORD}" "${nextcloud_MYSQL_DATABASE}" nextcloud
;;
paheko)
# rien à faire (fichiers)
;;
mattermost)
echo "save mattermost"
saveDB ${mattermostDBName} "${mattermost_MYSQL_USER}" "${mattermost_MYSQL_PASSWORD}" "${mattermost_MYSQL_DATABASE}" mattermost
;;
dokuwiki)
# rien à faire (fichiers)
;;
*-orga)
ORGA=${compose%-orga}
echo "save ${ORGA}"
if grep -q "cloud:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => cloud"
saveDB "${ORGA}-DB" "${nextcloud_MYSQL_USER}" "${nextcloud_MYSQL_PASSWORD}" "${nextcloud_MYSQL_DATABASE}" "${ORGA}-cloud"
fi
if grep -q "agora:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => mattermost"
saveDB "${ORGA}-DB" "${mattermost_MYSQL_USER}" "${mattermost_MYSQL_PASSWORD}" "${mattermost_MYSQL_DATABASE}" "${ORGA}-mattermost"
fi
if grep -q "wordpress:" "${KAZ_COMP_DIR}/${compose}/docker-compose.yml" 2> /dev/null ; then
echo " => wordpress"
saveDB "${ORGA}-DB" "${wp_MYSQL_USER}" "${wp_MYSQL_PASSWORD}" "${wp_MYSQL_DATABASE}" "${ORGA}-wordpress"
fi
;;
esac
done
}
if [ "$#" -eq 0 ] ; then
usage
fi
if [ "$1" == "-h" ] ; then
usage
shift
fi
if [ "$1" == "-n" ] ; then
export SIMU=echo
shift
fi
DCK_CMD=""
SAVE_CMD=""
case "$1" in
start)
DCK_CMD="startComposes"
shift
;;
stop)
DCK_CMD="stopComposes"
shift
;;
save)
SAVE_CMD="saveComposes"
shift
;;
status)
DCK_CMD="statusComposes"
shift
;;
*)
usage
;;
esac
if [ $# -eq 0 ] ; then
enableComposesNoNeedMail=("${availableComposesNoNeedMail[@]}")
enableMailComposes=("${availableMailComposes[@]}")
enableComposesNeedMail=("${availableComposesNeedMail[@]}")
enableProxyComposes=("${availableProxyComposes[@]}")
else
if [ "${DCK_CMD}" = "startComposes" ] ; then
enableProxyComposes=("${availableProxyComposes[@]}")
fi
fi
for compose in $*
do
compose=${compose%/}
if [[ ! " ${knownedComposes[@]} " =~ " ${compose} " ]]; then
declare -a subst
subst=()
for item in "${knownedComposes[@]}"; do
[[ "${item}" =~ "${compose}" ]] && subst+=(${item})
done
if [ "${subst}" = "" ] ; then
echo
echo "Unknown compose: ${RED}${BOLD}${compose}${NC} not in ${YELLOW}${BOLD}${knownedComposes[*]}${NC}"
echo
exit 1
else
echo "substitute compose: ${YELLOW}${BOLD}${compose} => ${subst[@]}${NC}"
fi
fi
for item in "${availableMailComposes[@]}"; do
[[ "${item}" =~ "${compose}" ]] && enableMailComposes+=("${item}")
done
for item in "${availableProxyComposes[@]}"; do
[[ "${item}" =~ "${compose}" ]] && enableProxyComposes=("${item}")
done
for item in "${availableComposesNoNeedMail[@]}"; do
[[ "${item}" =~ "${compose}" ]] && enableComposesNoNeedMail+=("${item}")
done
for item in "${availableComposesNeedMail[@]}"; do
[[ "${item}" =~ "${compose}" ]] && enableComposesNeedMail+=("${item}")
done
done
[[ ! -z "${DCK_CMD}" ]] && "${DCK_CMD}" && exit 0
[[ ! -z "${SAVE_CMD}" ]] && "${SAVE_CMD}" && exit 0
exit 1

104
bin/createEmptyPasswd.sh Executable file
View File

@ -0,0 +1,104 @@
#!/bin/bash
cd $(dirname $0)/..
mkdir -p emptySecret
rsync -aHAX --info=progress2 --delete secret/ emptySecret/
cd emptySecret/
. ../config/dockers.env
. ./SetAllPass.sh
# pour mise au point
# SIMU=echo
cleanEnvDB(){
# $1 = prefix
# $2 = envName
# $3 = containerName of DB
rootPass="--root_password--"
dbName="--database_name--"
userName="--user_name--"
userPass="--user_password--"
${SIMU} sed -i \
-e "s/MYSQL_ROOT_PASSWORD=.*/MYSQL_ROOT_PASSWORD=${rootPass}/g" \
-e "s/MYSQL_DATABASE=.*/MYSQL_DATABASE=${dbName}/g" \
-e "s/MYSQL_USER=.*/MYSQL_USER=${userName}/g" \
-e "s/MYSQL_PASSWORD=.*/MYSQL_PASSWORD=${userPass}/g" \
"$2"
}
cleanEnv(){
# $1 = prefix
# $2 = envName
for varName in $(grep "^[a-zA-Z_]*=" $2 | sed "s/^\([^=]*\)=.*/\1/g")
do
srcName="$1_${varName}"
srcVal="--clean_val--"
${SIMU} sed -i \
-e "s~^[ ]*${varName}=.*$~${varName}=${srcVal}~" \
"$2"
done
}
cleanPasswd(){
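# Blank every literal quoted value in SetAllPass.sh; values that expand
# another variable (e.g. "${...}") are left untouched.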
${SIMU} sed -i \
-e 's/^\([# ]*[^#= ]*\)=".[^{][^"]*"/\1="--clean_val--"/g' \
./SetAllPass.sh
}
####################
# main
# read -r -p "Do you want to remove all password? [Y/n] " input
# case $input in
# [yY][eE][sS]|[yY])
# echo "Remove all password"
# ;;
# [nN][oO]|[nN])
# echo "Abort"
# ;;
# *)
# echo "Invalid input..."
# exit 1
# ;;
# esac
cleanPasswd
cleanEnvDB "etherpad" "./env-${etherpadDBName}" "${etherpadDBName}"
cleanEnvDB "framadate" "./env-${framadateDBName}" "${framadateDBName}"
cleanEnvDB "git" "./env-${gitDBName}" "${gitDBName}"
cleanEnvDB "mattermost" "./env-${mattermostDBName}" "${mattermostDBName}"
cleanEnvDB "nextcloud" "./env-${nextcloudDBName}" "${nextcloudDBName}"
cleanEnvDB "roundcube" "./env-${roundcubeDBName}" "${roundcubeDBName}"
cleanEnvDB "sso" "./env-${ssoDBName}" "${ssoDBName}"
cleanEnvDB "sympa" "./env-${sympaDBName}" "${sympaDBName}"
cleanEnvDB "vigilo" "./env-${vigiloDBName}" "${vigiloDBName}"
cleanEnvDB "wp" "./env-${wordpressDBName}" "${wordpressDBName}"
cleanEnv "etherpad" "./env-${etherpadServName}"
cleanEnv "gandi" "./env-gandi"
cleanEnv "jirafeau" "./env-${jirafeauServName}"
cleanEnv "mattermost" "./env-${mattermostServName}"
cleanEnv "nextcloud" "./env-${nextcloudServName}"
cleanEnv "office" "./env-${officeServName}"
cleanEnv "roundcube" "./env-${roundcubeServName}"
cleanEnv "sso" "./env-${ssoServName}"
cleanEnv "vigilo" "./env-${vigiloServName}"
cleanEnv "wp" "./env-${wordpressServName}"
cat > allow_admin_ip <<EOF
# ip for admin access only
# local test
allow 127.0.0.0/8;
allow 192.168.0.0/16;
EOF
chmod -R go= .
chmod -R +X .

16
bin/createSrcDocker.sh Executable file
View File

@ -0,0 +1,16 @@
#!/bin/bash
cd $(dirname $0)
./setOwner.sh
./createEmptyPasswd.sh
cd ../..
FILE_NAME="/tmp/$(date +'%Y%m%d')-KAZ.tar.bz2"
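# e.g. /tmp/20240603-KAZ.tar.bz2 (date of the build)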
tar -cjf "${FILE_NAME}" --transform s/emptySecret/secret/ \
./kaz/emptySecret/ ./kaz/bin ./kaz/config ./kaz/dockers
ls -l "${FILE_NAME}"

796
bin/createUser.sh Executable file
View File

@ -0,0 +1,796 @@
#!/bin/bash
# kan: 30/03/2021
# koi: créer les users dans le système KAZ, le KazWorld, to become a kaznaute, a kaaaaaaaznaute!
# ki : fab
# test git du 02/10/2023 depuis snster
# !!! needs htpasswd and dos2unix:
# apt-get install apache2-utils dos2unix
# rechercher tous les TODO du script pour le reste à faire
##########################################################
# fonctionnement :
# vérification de l'existence du fichier des demandes et création si absent
# on garnit les variables
# on vérifie les variables
# on créé un mdp utilisable par tous les services (identifiant : email kaz)
# pour chacun des services KAZ (NC / WP / DOKUWIKI)
# * on vérifie si le sous-domaine existe, on le créé sinon
# * on créé le user et le met admin si nécessaire
# * s'il existe déjà, rollback (y compris sur les autres services)
# pour paheko, on vérifie si le sous-domaine existe, on le créé sinon
# pour mattermost, on crée le user et l'équipe si nécessaire, sur l'agora de base
# tout est ok, on créé l'email
# on créé le mail d'inscription avec tout le détail des services créés (url/user)
# on inscrit le user dans la liste infos@${domain_sympa}
# on avertit contact@kaz.bzh et on poste dans l'agora/creation_compte
# TODO : utilisez la req sql pour attaquer paheko et créer createUser.txt en auto et modifier le champ dans paheko ACTION de "à créer" à "aucune"
# on récupère toutes les variables et mdp
# on prend comme source des repertoire le dossier du dessus ( /kaz dans notre cas )
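# Typical workflow (illustrative): fill ${KAZ_ROOT}/tmp/createUser.txt (one
# account per line, see the template generated below), run "./createUser.sh -s"
# to generate and review the command files, then "./createUser.sh -e" to apply them.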
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_ROOT}"
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
# DOCK_DIR="${KAZ_COMP_DIR}" # ???
SETUP_MAIL="docker exec -ti mailServ setup"
# on détermine le script appelant, le fichier log et le fichier source, tous issus de la même racine
PRG=$(basename $0)
RACINE=${PRG%.sh}
CREATE_ORGA_CMD="${KAZ_CONF_DIR}/orgaTmpl/orga-gen.sh"
mkdir -p "${KAZ_ROOT}/tmp" "${KAZ_ROOT}/log"
# fichier source dans lequel se trouve les infos sur les utilisateurs à créer
FILE="${KAZ_ROOT}/tmp/${RACINE}.txt"
# fichier de log
LOG="${KAZ_ROOT}/log/${RACINE}.log"
# TODO : risque si 2 admins lance en même temps
CMD_LOGIN="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-1-LOGIN.sh"
CMD_SYMPA="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-2-SYMPA.sh"
CMD_ORGA="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-3-ORGA.sh"
CMD_PROXY="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-4-PROXY.sh"
CMD_FIRST="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-5-FIRST.sh"
CMD_INIT="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-6-INIT.sh"
CMD_PAHEKO="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-7-PAHEKO.sh"
CMD_MSG="${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-8-MSG.sh"
TEMP_PAHEKO="${KAZ_ROOT}/tmp/${RACINE}.TEMP_PAHEKO.csv"
URL_SITE="${domain}"
URL_WEBMAIL="${webmailHost}.${domain}"
URL_LISTE="${sympaHost}.${domain}"
URL_AGORA="${matterHost}.${domain}"
URL_MDP="${ldapUIHost}.${domain}"
# URL_PAHEKO="kaz-${pahekoHost}.${domain}"
URL_PAHEKO="${httpProto}://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.${domain}"
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
NL_LIST=infos@${domain_sympa}
# indiqué dans le mail d'inscription
# (mail+cloud base+agora : max=3, min=2)
NB_SERVICES_BASE=0
# max : 5, min : 0
NB_SERVICES_DEDIES=0
# note qu'on rajoute dans le mail pour les orgas
MESSAGE_MAIL_ORGA_1=""
MESSAGE_MAIL_ORGA_2=""
MESSAGE_MAIL_ORGA_3=""
############################
# Traitement des arguments #
############################
CREATE_ORGA="true"
SIMULATION=YES
usage () {
echo "${PRG} [-h] [-s] [-e] [-v] [-u]"
echo " version 1.0"
echo " Create users in kaz world using ${FILE} as source file. All logs in ${LOG}"
echo " -h Display this help."
echo " -s Simulate. none user created but you can see the result in ${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-*.sh"
echo " -e Execute commands. user or orga will be created !!!"
echo " -u create only user (don't create orga)"
echo " -v or -V display site informations"
}
for ARG in $*; do
case "${ARG}" in
'-h' | '-help' )
usage
shift
exit;;
-s)
shift;;
-e)
SIMULATION=NO
shift;;
-u)
# only user => no orga
CREATE_ORGA=
shift;;
'-v' | '-V' )
echo "${PRG}, root : ${KAZ_ROOT}, on domain : ${URL_SITE}"
exit
;;
*)
usage
echo "${PRG}: ${RED}unknown parameter${NC}"
shift
exit;;
esac
done
##################################
# Inventaire des comptes à créer #
##################################
# la recherche des comptes à créer se fait avec la commande :
# bin/interoPaheko.sh
# création d'un fichier vide
# TODO : même code ici et dans interoPaheko.sh => risque de divergence
if [ ! -s "${FILE}" ];then
echo "${RED}"
echo "ERREUR : le fichier ${FILE} n'existait pas"
echo "Il vient d'être créé. Vous pouvez le compléter."
echo "${NC}"
cat > "${FILE}" <<EOF
# -- fichier de création des comptes KAZ
# --
# -- 1 ligne par compte
# -- champs séparés par ";". les espaces en début et en fin sont enlevés
# -- laisser vide si pas de donnée
# -- pas d'espace dans les variables
# --
# -- ORGA : nom de l'organisation (max 23 car), vide sinon
# -- ADMIN_ORGA : O/N indique si le user est admin de l'orga (va le créer comme admin du NC de l'orga et admin de l'équipe agora)
# -- NC_ORGA : O/N indique si l'orga a demandé un NC
# -- PAHEKO_ORGA : O/N indique si l'orga a demandé un paheko
# -- WP_ORGA : O/N indique si l'orga a demandé un wp
# -- AGORA_ORGA : O/N indique si l'orga a demandé un mattermost
# -- WIKI_ORGA : O/N indique si l'orga a demandé un wiki
# -- NC_BASE : O/N indique si le user doit être inscrit dans le NC de base
# -- GROUPE_NC_BASE : soit null soit le groupe dans le NC de base
# -- EQUIPE_AGORA : soit null soit equipe agora (max 23 car)
# -- QUOTA = (1/10/20/...) en GB
# --
# NOM ; PRENOM ; EMAIL_SOUHAITE ; EMAIL_SECOURS ; ORGA ; ADMIN_ORGA ; NC_ORGA ; PAHEKO_ORGA ; WP_ORGA ; AGORA_ORGA ; WIKI_ORGA ; NC_BASE ; GROUPE_NC_BASE ; EQUIPE_AGORA ; QUOTA
# exemple pour un compte découverte :
# loufoque ; le_mec; loufoque.le-mec@kaz.bzh ; gregomondo@kaz.bzh; ; N; N; N; N; N; N;N;;; 1
# exemple pour un compte asso de l'orga gogol avec le service dédié NC uniquement + une équipe dans l'agora
# loufoque ; le_mec; loufoque.le-mec@kaz.bzh ; gregomondo@kaz.bzh; gogol ; O; O; N; N; N; N;N;;gogol_team; 10
EOF
exit
fi
ALL_LINES=$(sed -e "/^[ \t]*#.*$/d" -e "/^[ \t]*$/d" "${FILE}")
if [ -z "${ALL_LINES}" ];then
usage
echo "${PRG}: ${RED}nothing to do in ${FILE}${NC}"
exit
fi
###################
# Initialisations #
###################
# emails et les alias KAZ déjà créés
TFILE_EMAIL="$(mktemp /tmp/${RACINE}.XXXXXXXXX.TFILE_EMAIL)"
# comptes mattermost
TFILE_MM="$(mktemp /tmp/${RACINE}.XXXXXXXXX.TFILE_MM)"
# l'ident NextCloud
TEMP_USER_NC="$(mktemp /tmp/${RACINE}.XXXXXXXXX.TEMP_USER_NC)"
# le groupe NextCloud
TEMP_GROUP_NC="$(mktemp /tmp/${RACINE}.XXXXXXXXX.TEMP_GROUP_NC)"
# l'ident WP
TEMP_USER_WP="$(mktemp /tmp/${RACINE}.XXXXXXXXX.TEMP_USER_WP)"
trap "rm -f '${TFILE_EMAIL}' '${TFILE_MM}' '${TEMP_USER_NC}' '${TEMP_GROUP_NC}' '${TEMP_USER_WP}'" 0 1 2 3 15
for i in "${CMD_LOGIN}" "${CMD_SYMPA}" "${CMD_ORGA}" "${CMD_PROXY}" "${CMD_FIRST}" "${CMD_INIT}" "${CMD_PAHEKO}" "${CMD_MSG}"; do
echo "#!/bin/bash" > "${i}" && chmod +x "${i}"
done
echo "numero,nom,quota_disque,action_auto" > "${TEMP_PAHEKO}"
echo "curl \"https://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.kaz.bzh/api/user/import\" -T \"${TEMP_PAHEKO}\"" >> "${CMD_PAHEKO}"
#echo "récupération des login postfix... "
## on stocke les emails et les alias KAZ déjà créés
#(
# ${SETUP_MAIL} email list
# ${SETUP_MAIL} alias list
#) | cut -d ' ' -f 2 | grep @ | sort > "${TFILE_EMAIL}"
# did on supprime le ^M en fin de fichier pour pas faire planter les grep
#dos2unix "${TFILE_EMAIL}"
echo "on récupère tous les emails (secours/alias/kaz) sur le ldap"
FILE_LDIF=/home/sauve/ldap.ldif
/kaz/bin/ldap/ldap_sauve.sh
gunzip ${FILE_LDIF}.gz -f
grep -aEiorh '([[:alnum:]]+([._-][[:alnum:]]+)*@[[:alnum:]]+([._-][[:alnum:]]+)*\.[[:alpha:]]{2,6})' ${FILE_LDIF} | sort -u > ${TFILE_EMAIL}
echo "récupération des login mattermost... "
docker exec -ti mattermostServ bin/mmctl user list --all | grep ":.*(" | cut -d ':' -f 2 | cut -d ' ' -f 2 | sort > "${TFILE_MM}"
dos2unix "${TFILE_MM}"
echo "done"
# se connecter à l'agora pour ensuite pouvoir passer toutes les commandes mmctl
echo "docker exec -ti mattermostServ bin/mmctl auth login ${httpProto}://${URL_AGORA} --name local-server --username ${mattermost_user} --password ${mattermost_pass}" | tee -a "${CMD_INIT}"
# vérif des emails
regex="^(([A-Za-z0-9]+((\.|\-|\_|\+)?[A-Za-z0-9]?)*[A-Za-z0-9]+)|[A-Za-z0-9]+)@(([A-Za-z0-9]+)+((\.|\-|\_)?([A-Za-z0-9]+)+)*)+\.([A-Za-z]{2,})+$"
function validator {
if ! [[ "$1" =~ ${regex} ]]; then
# printf "* %-48s \e[1;31m[fail]\e[m\n" "${1}"
(
echo
echo "ERREUR : le paramètre ${RED}${BOLD}$1 n'est pas un email valide${NC} - on stoppe tout - aucun utilisateur de créé"
echo
) | tee -a "${LOG}"
exit 1
fi
}
######################################
# Boucle lecture des comptes à créer #
######################################
echo -e "$(date '+%Y-%m-%d %H:%M:%S') : ${PRG} - sauvegarde des utilisateurs à créer" | tee "${LOG}"
cat "${FILE}" >> "${LOG}"
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
ALL_ORGA=
while read ligne; do
# | xargs permet de faire un trim
NOM=$(awk -F ";" '{print $1}' <<< "${ligne}" | xargs)
PRENOM=$(awk -F ";" '{print $2}' <<< "${ligne}" | xargs)
declare -A tab_email
tab_email[EMAIL_SOUHAITE]=$(awk -F ";" '{print $3}' <<< "${ligne}" | xargs)
tab_email[EMAIL_SECOURS]=$(awk -F ";" '{print $4}' <<< "${ligne}" | xargs)
ORGA=$(awk -F ";" '{print $5}' <<< "${ligne}" | xargs)
ORGA=${ORGA,,}
declare -A service
service[ADMIN_ORGA]=$(awk -F ";" '{print $6}' <<< "${ligne}" | xargs)
service[NC_ORGA]=$(awk -F ";" '{print $7}' <<< "${ligne}" | xargs)
service[PAHEKO_ORGA]=$(awk -F ";" '{print $8}' <<< "${ligne}" | xargs)
service[WP_ORGA]=$(awk -F ";" '{print $9}' <<< "${ligne}" | xargs)
service[AGORA_ORGA]=$(awk -F ";" '{print $10}' <<< "${ligne}" | xargs)
service[WIKI_ORGA]=$(awk -F ";" '{print $11}' <<< "${ligne}" | xargs)
service[NC_BASE]=$(awk -F ";" '{print $12}' <<< "${ligne}" | xargs)
GROUPE_NC_BASE=$(awk -F ";" '{print $13}' <<< "${ligne}" | xargs)
GROUPE_NC_BASE="${GROUPE_NC_BASE,,}"
EQUIPE_AGORA=$(awk -F ";" '{print $14}' <<< "${ligne}" | xargs)
EQUIPE_AGORA=${EQUIPE_AGORA,,}
QUOTA=$(awk -F ";" '{print $15}' <<< "${ligne}" | xargs)
PASSWORD=$(awk -F ";" '{print $16}' <<< "${ligne}" | xargs)
IDENT_KAZ=$(unaccent utf8 "${PRENOM,,}.${NOM,,}")
EMAIL_SOUHAITE=${tab_email[EMAIL_SOUHAITE]}
EMAIL_SECOURS=${tab_email[EMAIL_SECOURS]}
echo -e "${NL}***************************** traitement de ${ligne}" | tee -a "${LOG}"
###########################
# Vérification des champs #
###########################
for k in "${!tab_email[@]}"; do
validator "${tab_email[${k}]}"
done
# vérif des champs O/N
for k in "${!service[@]}"; do
if [ "${service[${k}]}" != "O" -a "${service[${k}]}" != "N" ]; then
(
echo "${RED}"
echo "${k} : ${service[${k}]}"
echo "ERREUR : le paramètre ${k} accepte O ou N - on stoppe tout - aucun utilisateur de créé"
echo "${NC}"
) | tee -a "${LOG}"
exit 1
fi
done
# taille ORGA et EQUIPE_AGORA
TAILLE_MAX="23"
if [ "${#ORGA}" -gt "${TAILLE_MAX}" ]; then
(
echo "${RED}"
echo "ERREUR : le paramètre ORGA est trop grand : ${ORGA} , taille max : ${TAILLE_MAX} - on stoppe tout - aucun utilisateur de créé"
echo "${NC}"
) | tee -a "${LOG}"
exit 1
fi
if [ "${#ORGA}" -gt "0" ]; then
if [[ "${ORGA}" =~ ^[[:alnum:]-]+$ ]]; then
echo "ok"
else
(
echo "${RED}"
echo "ERREUR : le paramètre ORGA ne contient pas les caractères autorisés : ${ORGA} - on stoppe tout - aucun utilisateur de créé"
echo "${NC}"
) | tee -a "${LOG}"
exit 1
fi
fi
if [ "${#EQUIPE_AGORA}" -gt "${TAILLE_MAX}" ]; then
(
echo "${RED}"
echo "ERREUR : le paramètre EQUIPE_AGORA est trop grand : ${EQUIPE_AGORA} , taille max : ${TAILLE_MAX} - on stoppe tout - aucun utilisateur de créé"
echo "${NC}"
) | tee -a "${LOG}"
exit 1
fi
# vérif quota est entier
if ! [[ "${QUOTA}" =~ ^[[:digit:]]+$ ]]; then
(
echo
echo "ERREUR : ${RED}${BOLD}QUOTA n'est pas numérique : ${QUOTA}${NC} - on stoppe tout - aucun utilisateur de créé"
) | tee -a "${LOG}"
fi
####################################################
# cree un mdp acceptable par postfix/nc/mattermost #
####################################################
if [ -z "${PASSWORD}" ]; then
PASSWORD=_`apg -n 1 -m 10 -M NCL -d`_
fi
SEND_MSG_CREATE=
if [ -n "${ORGA}" -a -z "${CREATE_ORGA}" ]; then
# skip orga
continue
fi
####################################################################
# TODO: Test de l'identKAZ du ldap, il faut l'unicité. si KO, STOP #
####################################################################
###################################
# Création du compe de messagerie #
###################################
# le mail existe t-il déjà ?
if grep -q "^${EMAIL_SOUHAITE}$" "${TFILE_EMAIL}"; then
echo "${EMAIL_SOUHAITE} existe déjà" | tee -a "${LOG}"
else
SEND_MSG_CREATE=true
echo "${EMAIL_SOUHAITE} n'existe pas" | tee -a "${LOG}"
echo "${SETUP_MAIL} email add ${EMAIL_SOUHAITE} ${PASSWORD}" | tee -a "${CMD_LOGIN}"
echo "${SETUP_MAIL} quota set ${EMAIL_SOUHAITE} ${QUOTA}G" | tee -a "${CMD_LOGIN}"
# LDAP, à tester
user=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $1}')
domain=$(echo ${EMAIL_SOUHAITE} | awk -F '@' '{print $2}')
pass=$(mkpasswd -m sha512crypt ${PASSWORD})
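# Build an LDIF "add" entry for the new account and append the corresponding
# ldapmodify command to CMD_LOGIN (executed later, or only reviewed in simulation mode).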
echo "echo -e '\n\ndn: cn=${EMAIL_SOUHAITE},ou=users,${ldap_root}\n\
changeType: add\n\
objectclass: inetOrgPerson\n\
objectClass: PostfixBookMailAccount\n\
objectClass: nextcloudAccount\n\
objectClass: kaznaute\n\
sn: ${PRENOM} ${NOM}\n\
mail: ${EMAIL_SOUHAITE}\n\
mailEnabled: TRUE\n\
mailGidNumber: 5000\n\
mailHomeDirectory: /var/mail/${domain}/${user}/\n\
mailQuota: ${QUOTA}G\n\
mailStorageDirectory: maildir:/var/mail/${domain}/${user}/\n\
mailUidNumber: 5000\n\
mailDeSecours: ${EMAIL_SECOURS}\n\
identifiantKaz: ${IDENT_KAZ}\n\
quota: ${QUOTA}\n\
nextcloudEnabled: TRUE\n\
nextcloudQuota: ${QUOTA} GB\n\
mobilizonEnabled: TRUE\n\
agoraEnabled: TRUE\n\
userPassword: {CRYPT}${pass}\n\n' | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}\" -x -w ${ldap_LDAP_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
fi
#userPassword: {CRYPT}\$6\$${pass}\n\n\" | ldapmodify -c -H ldap://${LDAP_IP} -D \"cn=${ldap_LDAP_CONFIG_ADMIN_USERNAME},${ldap_root}\" -x -w ${ldap_LDAP_CONFIG_ADMIN_PASSWORD}" | tee -a "${CMD_LOGIN}"
CREATE_ORGA_SERVICES=""
#############
# NEXTCLOUD #
#############
# on recalcule l'url de NC
if [ "${ORGA}" != "" -a "${service[NC_ORGA]}" == "O" ]; then
URL_NC="${ORGA}-${cloudHost}.${domain}"
# si le cloud de l'orga n'est pas up alors on le créé
nb=$(docker ps | grep "${ORGA}-${cloudHost}" | wc -l)
if [ "${nb}" == "0" ];then
echo " * +cloud +collabora ${ORGA}"
CREATE_ORGA_SERVICES="${CREATE_ORGA_SERVICES} +cloud +collabora"
# installe les plugins initiaux dans "/kaz/bin/gestClouds.sh"
fi
NB_SERVICES_DEDIES=$((NB_SERVICES_DEDIES+1))
else
URL_NC="${cloudHost}.${domain}"
NB_SERVICES_BASE=$((NB_SERVICES_BASE+1))
fi
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1}${NL}* un bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances : ${httpProto}://${URL_NC}"
# le user existe t-il déjà sur NC ?
curl -o "${TEMP_USER_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
if grep -q "<element>${IDENT_KAZ}</element>" "${TEMP_USER_NC}"; then
echo "${IDENT_KAZ} existe déjà sur ${URL_NC}" | tee -a "${LOG}"
else
# on crée l'utilisateur sur NC, sauf si c'est le NC général (on n'y crée jamais l'utilisateur)
if [ ${URL_NC} != "${cloudHost}.${domain}" ]; then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
-d userid='${IDENT_KAZ}' \
-d displayName='${PRENOM} ${NOM}' \
-d password='${PASSWORD}' \
-d email='${EMAIL_SOUHAITE}' \
-d quota='${QUOTA}GB' \
-d language='fr' \
" | tee -a "${CMD_INIT}"
fi
# s'il est admin de son orga, on le met admin
if [ "${service[ADMIN_ORGA]}" == "O" -a "${ORGA}" != "" -a "${service[NC_ORGA]}" == "O" ]; then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://${nextcloud_NEXTCLOUD_ADMIN_USER}:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid='admin'" | tee -a "${CMD_INIT}"
fi
# faut-il mettre le user NC dans un groupe particulier sur le NC de base ?
if [ "${GROUPE_NC_BASE}" != "" -a "${service[NC_BASE]}" == "O" ]; then
# le groupe existe t-il déjà ?
curl -o "${TEMP_GROUP_NC}" -X GET -H 'OCS-APIRequest:true' "${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups?search=${GROUPE_NC_BASE}"
nb=$(grep "<element>${GROUPE_NC_BASE}</element>" "${TEMP_GROUP_NC}" | wc -l)
if [ "${nb}" == "0" ];then
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
fi
# puis attacher le user au groupe
echo "curl -X POST -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users/${IDENT_KAZ}/groups -d groupid=${GROUPE_NC_BASE}" | tee -a "${CMD_INIT}"
fi
fi
#############
# WORDPRESS #
#############
# TODO : pour l'utilisation de l'api : https://www.hostinger.com/tutorials/wordpress-rest-api
if [ "${ORGA}" != "" -a "${service[WP_ORGA]}" == "O" ]; then
URL_WP_ORGA="${ORGA}-${wordpressHost}.${domain}"
# si le wp de l'orga n'est pas up alors on le créé
nb=$(docker ps | grep "${ORGA}-${wordpressHost}" | wc -l)
if [ "${nb}" == "0" ];then
echo " * +wp ${ORGA}"
CREATE_ORGA_SERVICES="${CREATE_ORGA_SERVICES} +wp"
fi
NB_SERVICES_DEDIES=$((NB_SERVICES_DEDIES+1))
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1}${NL}* un site web de type wordpress : ${httpProto}://${URL_WP_ORGA}/wp-admin/"
# TODO : vérif existance user
# # le user existe t-il déjà sur le wp ?
# curl -o "${TEMP_USER_WP}" -X GET "${httpProto}://${wp_WORDPRESS_ADMIN_USER}:${wp_WORDPRESS_ADMIN_PASSWORD}@${URL_WP_ORGA}/ocs/v1.php/cloud/users?search=${IDENT_KAZ}"
# nb_user_wp_orga=$(grep "<element>${IDENT_KAZ}</element>" "${TEMP_USER_WP}" | wc -l)
# if [ "${nb_user_wp_orga}" != "0" ];then
# (
# echo "${RED}"
# echo "ERREUR : ${IDENT_KAZ} existe déjà sur ${URL_WP_ORGA} - on stoppe tout - aucun utilisateur de créé"
# echo "${NC}"
# ) | tee -a "${LOG}"
#
# # ROLLBACK - on vire le user de NC
# if [ "${nb_user_nc_orga}" != "0" ];then
# (
# echo "${RED}"
# echo "ERREUR : ${IDENT_KAZ} existe déjà sur ${URL_NC} - on stoppe tout - aucun utilisateur de créé"
# echo "${NC}"
# ) | tee -a "${LOG}"
#
# # on supprime l'utilisateur sur NC.
# echo "curl -X DELETE -H 'OCS-APIRequest:true' ${httpProto}://admin:${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}@${URL_NC}/ocs/v1.php/cloud/users \
# -d userid='${IDENT_KAZ}' \
# " | tee -a "${CMD_INIT}"
# fi
#
# exit 1
# fi
# TODO : créer le user et le mettre admin si nécessaire
# if [ "${service[ADMIN_ORGA]}" == "O" ]; then
# :
# else
# :
# fi
fi
############
# PAHEKO #
############
if [ "${ORGA}" != "" -a "${service[PAHEKO_ORGA]}" == "O" ]; then
URL_PAHEKO_ORGA="${ORGA}-${pahekoHost}.${domain}"
# il n'y a pas de docker spécifique paheko (je cree toujours paheko)
echo " * +paheko ${ORGA}"
CREATE_ORGA_SERVICES="${CREATE_ORGA_SERVICES} +paheko"
NB_SERVICES_DEDIES=$((NB_SERVICES_DEDIES+1))
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1}${NL}* un service de gestion adhérents/clients : ${httpProto}://${URL_PAHEKO_ORGA}"
if [ "${service[ADMIN_ORGA]}" == "O" ]; then
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1} (l'installation est à terminer en vous rendant sur le site)"
fi
fi
############
# DOKUWIKI #
############
if [ "${ORGA}" != "" -a "${service[WIKI_ORGA]}" == "O" ]; then
URL_WIKI_ORGA="${ORGA}-${dokuwikiHost}.${domain}"
# si le wiki de l'orga n'est pas up alors on le créé
nb=$(docker ps | grep "${ORGA}-${dokuwikiHost}" | wc -l)
if [ "${nb}" == "0" ];then
echo " * +wiki ${ORGA}"
CREATE_ORGA_SERVICES="${CREATE_ORGA_SERVICES} +wiki"
fi
NB_SERVICES_DEDIES=$((NB_SERVICES_DEDIES+1))
MESSAGE_MAIL_ORGA_1="${MESSAGE_MAIL_ORGA_1}${NL}* un wiki dédié pour votre documentation : ${httpProto}://${URL_WIKI_ORGA}"
# TODO : ??? à voir https://www.dokuwiki.org/devel:xmlrpc:clients
if grep -q "^${IDENT_KAZ}:" "${DOCK_VOL}/orga_${ORGA}-wikiConf/_data/users.auth.php" 2>/dev/null; then
echo "${IDENT_KAZ} existe déjà sur ${URL_WIKI_ORGA}" | tee -a "${LOG}"
else
echo "echo \"${IDENT_KAZ}:$(htpasswd -bnBC 10 "" ${PASSWORD}):${PRENOM} ${NOM}:${EMAIL_SOUHAITE}:admin,user\" >> \"${DOCK_VOL}/orga_${ORGA}-wikiConf/_data/users.auth.php\"" | tee -a "${CMD_INIT}"
fi
fi
##############
# MATTERMOST #
##############
# on ne gère pas la création du docker dédié mattermost
if [ "${ORGA}" != "" -a "${service[AGORA_ORGA]}" == "O" ]; then
echo "# ******************************************************************************" | tee -a "${CMD_INIT}"
echo "# **************************** ATTENTION ***************************************" | tee -a "${CMD_INIT}"
echo "# ******************************************************************************" | tee -a "${CMD_INIT}"
echo "# Mattermost dédié : on ne fait rien." | tee -a "${CMD_INIT}"
echo "# ******************************************************************************" | tee -a "${CMD_INIT}"
fi
if grep -q "^${IDENT_KAZ}$" "${TFILE_MM}" 2>/dev/null; then
echo "${IDENT_KAZ} existe déjà sur mattermost" | tee -a "${LOG}"
else
# on créé le compte mattermost
echo "docker exec -ti mattermostServ bin/mmctl user create --email ${EMAIL_SOUHAITE} --username ${IDENT_KAZ} --password ${PASSWORD}" | tee -a "${CMD_LOGIN}"
# et enfin on ajoute toujours le user à l'équipe KAZ et aux 2 channels publiques
echo "docker exec -ti mattermostServ bin/mmctl team users add kaz ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:une-question--un-soucis ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
echo "docker exec -ti mattermostServ bin/mmctl channel users add kaz:cafe-du-commerce--ouvert-2424h ${EMAIL_SOUHAITE}" | tee -a "${CMD_LOGIN}"
NB_SERVICES_BASE=$((NB_SERVICES_BASE+1))
fi
if [ "${EQUIPE_AGORA}" != "" -a "${EQUIPE_AGORA}" != "kaz" ]; then
# l'équipe existe t-elle déjà ?
nb=$(docker exec mattermostServ bin/mmctl team list | grep -w "${EQUIPE_AGORA}" | wc -l)
if [ "${nb}" == "0" ];then # non, on la créé en mettant le user en admin de l'équipe
echo "docker exec -ti mattermostServ bin/mmctl team create --name ${EQUIPE_AGORA} --display_name ${EQUIPE_AGORA} --email ${EMAIL_SOUHAITE}" --private | tee -a "${CMD_INIT}"
fi
# puis ajouter le user à l'équipe
echo "docker exec -ti mattermostServ bin/mmctl team users add ${EQUIPE_AGORA} ${EMAIL_SOUHAITE}" | tee -a "${CMD_INIT}"
fi
if [ -n "${CREATE_ORGA_SERVICES}" ]; then
SEND_MSG_CREATE=true
echo "${CREATE_ORGA_CMD}" --create ${CREATE_ORGA_SERVICES} "${ORGA}" | tee -a "${CMD_ORGA}"
echo "${CREATE_ORGA_CMD}" --init ${CREATE_ORGA_SERVICES} "${ORGA}" | tee -a "${CMD_FIRST}"
ALL_ORGA="${ALL_ORGA} ${ORGA}"
fi
##########################
# Inscription newsletter #
##########################
# TODO : utiliser liste sur dev également
# on inscrit le user sur sympa, à la liste infos@${domain_sympa}
# docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=https://listes.kaz.sns/sympasoap --trusted_application=SOAP_USER --trusted_application_password=SOAP_PASSWORD --proxy_vars="USER_EMAIL=contact1@kaz.sns" --service=which
if [[ "${mode}" = "dev" ]]; then
echo "# DEV, on teste l'inscription à sympa"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
else
echo "# PROD, on inscrit à sympa"| tee -a "${CMD_SYMPA}"
LISTMASTER=$(echo ${sympa_LISTMASTERS} | cut -d',' -f1)
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SOUHAITE}\"" | tee -a "${CMD_SYMPA}"
echo "docker exec -ti sympaServ /usr/lib/sympa/bin/sympa_soap_client.pl --soap_url=${httpProto}://${URL_LISTE}/sympasoap --trusted_application=${sympa_SOAP_USER} --trusted_application_password=${sympa_SOAP_PASSWORD} --proxy_vars=\"USER_EMAIL=${LISTMASTER}\" --service=add --service_parameters=\"${NL_LIST},${EMAIL_SECOURS}\"" | tee -a "${CMD_SYMPA}"
fi
if [ "${service[ADMIN_ORGA]}" == "O" ]; then
MESSAGE_MAIL_ORGA_2="${MESSAGE_MAIL_ORGA_2}Comme administrateur de votre organisation, vous pouvez créer des listes de diffusion en vous rendant sur ${httpProto}://${URL_LISTE}"
fi
###################
# update paheko #
###################
# TODO : problème si 2 comptes partagent le même email souhaité (cela ne devrait pas arriver)
curl -s "https://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.kaz.bzh/api/sql" -d "SELECT numero,nom,quota_disque from users WHERE email='${EMAIL_SOUHAITE}'" | jq '.results[] | .numero,.nom,.quota_disque ' | tr \\n ',' | sed 's/,$/,Aucune\n/' >> "${TEMP_PAHEKO}"
####################
# Inscription MAIL #
####################
if [ "${NB_SERVICES_DEDIES}" != "0" ];then
MESSAGE_MAIL_ORGA_1="${NL} dont ${NB_SERVICES_DEDIES} service(s) dédié(s) pour votre organisation:${NL} ${MESSAGE_MAIL_ORGA_1}"
fi
if [ -z "${SEND_MSG_CREATE}" ]; then
# rien de créé => pas de message
continue
fi
#si admin alors msg pour indiquer qu'il faut nous renvoyer ce qu'il souhaite comme service.
if [ "${service[ADMIN_ORGA]}" == "O" ]; then
MESSAGE_MAIL_ORGA_3="${MESSAGE_MAIL_ORGA_3}En tant qu'association/famille/société. Vous avez la possibilité d'ouvrir, quand vous le voulez, des services kaz, il vous suffit de nous le demander.
Pourquoi n'ouvrons-nous pas tous les services tout de suite ? parce que nous aimons la sobriété et que nous préservons notre espace disque ;)
A quoi sert d'avoir un site web si on ne l'utilise pas, n'est-ce pas ?
Par retour de mail, dites-nous de quoi vous avez besoin tout de suite entre:
* une comptabilité : un service de gestion adhérents/clients
* un site web de type WordPress
* un cloud : bureau virtuel pour stocker des fichiers/calendriers/contacts et partager avec vos connaissances
Une fois que vous aurez répondu à ce mail, votre demande sera traitée manuellement.
"
fi
# on envoie le mail de bienvenue
MAIL_KAZ="Bonjour,
Bienvenue chez KAZ!
Vous disposez de $((${NB_SERVICES_BASE} + ${NB_SERVICES_DEDIES})) services kaz avec authentification :
* une messagerie classique : ${httpProto}://${URL_WEBMAIL}
* une messagerie instantanée pour discuter au sein d'équipes : ${httpProto}://${URL_AGORA}
Votre email et identifiant pour tous ces services : ${EMAIL_SOUHAITE}
Le mot de passe : ${PASSWORD}
Pour changer votre mot de passe de messagerie, c'est ici: ${httpProto}://${URL_MDP}
Si vous avez perdu votre mot de passe, c'est ici: ${httpProto}://${URL_MDP}/?action=sendtoken
Vous pouvez accéder à votre messagerie :
* soit depuis votre webmail : ${httpProto}://${URL_WEBMAIL}
* soit depuis votre bureau virtuel : ${httpProto}://${URL_NC}
* soit depuis un client de messagerie comme thunderbird
${MESSAGE_MAIL_ORGA_3}
Vous avez quelques docs intéressantes sur le wiki de kaz:
* Migrer son site internet wordpress vers kaz
https://wiki.kaz.bzh/wordpress/start#migrer_son_site_wordpress_vers_kaz
* Migrer sa messagerie vers kaz
https://wiki.kaz.bzh/messagerie/gmail/start
* Démarrer simplement avec son cloud
https://wiki.kaz.bzh/nextcloud/start
Votre quota est de ${QUOTA}GB. Si vous souhaitez plus de place pour vos fichiers ou la messagerie, faites-nous signe !
Pour accéder à la messagerie instantanée et communiquer avec les membres de votre équipe ou ceux de kaz : ${httpProto}://${URL_AGORA}/login
${MESSAGE_MAIL_ORGA_2}
Enfin, vous disposez de tous les autres services KAZ où l'authentification n'est pas nécessaire : ${httpProto}://${URL_SITE}
En cas de soucis, n'hésitez pas à poser vos questions sur le canal 'Une question ? un soucis' de l'agora dispo ici : ${httpProto}://${URL_AGORA}
Si vous avez besoin d'accompagnement pour votre site, votre cloud, votre compta, votre migration de messagerie,... nous proposons des formations mensuelles gratuites. Si vous souhaitez être accompagné par un professionnel, nous pouvons vous donner une liste de pros, référencés par KAZ.
À bientôt ;)
La collégiale de KAZ. "
echo "docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset=\"UTF-8\"' -r contact@kaz.bzh -s \"KAZ: confirmation d'inscription\" ${EMAIL_SOUHAITE} ${EMAIL_SECOURS} << EOF
${MAIL_KAZ}
EOF" | tee -a "${CMD_MSG}"
# on envoie le mail de confirmation d'inscription à contact
MAIL_KAZ="*****POST AUTOMATIQUE******
Hello,
${NOM} ${PRENOM} vient d'être inscrit avec l'email ${EMAIL_SOUHAITE}
quota : ${QUOTA}GB
NC_BASE : ${service[NC_BASE]}
groupe NC base : ${GROUPE_NC_BASE}
équipe agora base : ${EQUIPE_AGORA}
email de secours : ${EMAIL_SECOURS}
ORGA : ${ORGA}
ADMIN_ORGA : ${service[ADMIN_ORGA]}
NC_ORGA : ${service[NC_ORGA]}
PAHEKO_ORGA : ${service[PAHEKO_ORGA]}
WP_ORGA : ${service[WP_ORGA]}
AGORA_ORGA : ${service[AGORA_ORGA]}
WIKI_ORGA : ${service[WIKI_ORGA]}
bisou!"
echo "docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset=\"UTF-8\"' -r contact@kaz.bzh -s \"KAZ: confirmation d'inscription\" ${EMAIL_CONTACT} << EOF
${MAIL_KAZ}
EOF" | tee -a "${CMD_MSG}"
echo " # on envoie la confirmation d'inscription sur l'agora " | tee -a "${CMD_MSG}"
echo "docker exec -ti mattermostServ bin/mmctl post create kaz:Creation-Comptes --message \"${MAIL_KAZ}\"" | tee -a "${CMD_MSG}"
# fin des inscriptions
done <<< "${ALL_LINES}"
if [[ -n "${ALL_ORGA}" ]]; then
echo "sleep 2" | tee -a "${CMD_PROXY}"
echo "${KAZ_BIN_DIR}/container.sh start ${ALL_ORGA}" | tee -a "${CMD_PROXY}"
for item in "${availableProxyComposes[@]}"; do
echo "cd \"${KAZ_COMP_DIR}/${item}/\"; ./proxy-gen.sh; docker-compose up -d; ./reload.sh " | tee -a "${CMD_PROXY}"
done
fi
###########################
# Lancement des commandes #
###########################
if [ "${SIMULATION}" == "NO" ]; then
echo "on exécute"
"${CMD_LOGIN}"
# on attend qques secondes que le mail soit bien créé avant de continuer (prob de lecture de la BAL : à investiguer)
# je rallonge à 20s car je vois que le docker sympa ne connait pas toujours l'email kaz créé
echo "on attend 20s pour que la création des emails soit certaine"
sleep 20
"${CMD_SYMPA}"
"${CMD_ORGA}"
"${CMD_PROXY}"
"${CMD_FIRST}"
"${CMD_INIT}"
"${CMD_PAHEKO}"
"${CMD_MSG}"
else
echo "Aucune commande n'a été lancée : Possibilité de le faire à la main. cf ${KAZ_ROOT}/tmp/${RACINE}_cmds_to_run-*.sh"
fi
# END

7
bin/cron-cloud.sh Executable file
View File

@ -0,0 +1,7 @@
#!/bin/bash
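# Run Nextcloud's cron.php (as www-data) in every running container matching
# nextcloudServ; note: "$12" assumes the container name appears as the 12th
# whitespace-separated field of "docker ps" output.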
for cloud in $(docker ps | grep -i nextcloudServ |awk '{print $12}')
do
docker exec -u www-data $cloud php cron.php
done

209
bin/dns.sh Executable file
View File

@ -0,0 +1,209 @@
#!/bin/bash
# liste/ajoute/supprime un sous-domaine
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "${KAZ_ROOT}"
export PRG="$0"
export IP="127.0.0.1"
export ETC_HOSTS="/etc/hosts"
# no more export in .env
export $(set | grep "domain=")
declare -a forbidenName
forbidenName=(${calcHost} calc ${cloudHost} bureau ${dateHost} date ${dokuwikiHost} dokuwiki ${fileHost} file ${ldapHost} ${pahekoHost} ${gitHost} ${gravHost} ${matterHost} ${officeHost} collabora ${padHost} ${sympaHost} listes ${webmailHost} ${wordpressHost} www ${vigiloHost} form)
export FORCE="NO"
export CMD=""
export SIMU=""
usage(){
echo "Usage: ${PRG} list [sub-domain...]"
echo " ${PRG} [-n] [-f] {add/del} sub-domain..."
echo " -h help"
echo " -n simulation"
echo " -f force protected domain"
exit 1
}
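# Example (sub-domain name is illustrative): "./dns.sh -n add blog" would show
# the Gandi API call creating a CNAME record "blog" pointing to ${site},
# without applying it.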
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-f' )
shift
export FORCE="YES"
;;
'-n' )
shift
export SIMU="echo"
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
usage
fi
. "${KAZ_KEY_DIR}/env-gandi"
if [[ -z "${GANDI_KEY}" ]] ; then
echo
echo "no GANDI_KEY set in ${KAZ_KEY_DIR}/env-gandi"
usage
fi
waitNet () {
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
### wait when error code 503
if [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]; then
echo "DNS not available. Please wait..."
while [[ $(curl -H "authorization: Apikey ${GANDI_KEY}" --connect-timeout 2 -s -D - "${GANDI_API}" -o /dev/null 2>/dev/null | head -n1) != *200* ]]
do
sleep 5
done
exit
fi
}
list(){
if [[ "${domain}" = "kaz.local" ]]; then
grep --perl-regex "^${IP}\s.*${domain}" "${ETC_HOSTS}" 2> /dev/null | sed -e "s|^${IP}\s*\([0-9a-z.-]${domain}\)$|\1|g"
return
fi
waitNet
trap 'rm -f "${TMPFILE}"' EXIT
TMPFILE="$(mktemp)" || exit 1
if [[ -n "${SIMU}" ]] ; then
${SIMU} curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}"
else
curl -X GET "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null | \
sed "s/,{/\n/g" | \
sed 's/.*rrset_name":"\([^"]*\)".*rrset_values":\["\([^"]*\)".*/\1:\2/g'| \
grep -v '^[_@]'| \
grep -e ":${domain}\.*$" -e ":prod[0-9]*$" > ${TMPFILE}
fi
if [ $# -lt 1 ]; then
cat ${TMPFILE}
else
for ARG in $@
do
cat ${TMPFILE} | grep "${ARG}.*:"
done
fi
}
saveDns () {
for ARG in $@ ; do
if [[ "${ARG}" =~ .local$ ]] ; then
echo "${PRG}: old fasion style (remove .local at the end)"
usage;
fi
if [[ "${ARG}" =~ .bzh$ ]] ; then
echo "${PRG}: old fasion style (remove .bzh at the end)"
usage;
fi
if [[ "${ARG}" =~ .dev$ ]] ; then
echo "${PRG}: old fasion style (remove .dev at the end)"
usage;
fi
done
if [[ "${domain}" = "kaz.local" ]]; then
return
fi
waitNet
${SIMU} curl -X POST "${GANDI_API}/snapshots" -H "authorization: Apikey ${GANDI_KEY}" 2>/dev/null
}
badName(){
[[ -z "$1" ]] && return 0;
for item in "${forbidenName[@]}"; do
[[ "${item}" == "$1" ]] && [[ "${FORCE}" == "NO" ]] && return 0
done
return 1
}
add(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a ADDED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
if grep -q --perl-regex "^${IP}[ \t]" "${ETC_HOSTS}" 2> /dev/null ; then
${SIMU} sudo sed -i -e "0,/^${IP}[ \t]/s/^\(${IP}[ \t]\)/\1${ARG}.${domain} /g" "${ETC_HOSTS}"
else
${SIMU} sudo sed -i -e "$ a ${IP}\t${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null
fi
;;
*)
${SIMU} curl -X POST "${GANDI_API}/records" -H "authorization: Apikey ${GANDI_KEY}" -H 'content-type: application/json' -d '{"rrset_type":"CNAME", "rrset_name":"'${ARG}'", "rrset_values":["'${site}'"]}'
echo
;;
esac
ADDED+=("${ARG}")
done
echo "Domains added to ${domain}: ${ADDED[@]}"
}
del(){
if [ $# -lt 1 ]; then
exit
fi
saveDns $@
declare -a REMOVED
for ARG in $@
do
if badName "${ARG}" ; then
echo "can't manage '${ARG}'. Use -f option"
continue
fi
case "${domain}" in
kaz.local )
if ! grep -q --perl-regex "^${IP}.*[ \t]${ARG}.${domain}" "${ETC_HOSTS}" 2> /dev/null ; then
break
fi
${SIMU} sudo sed -i -e "/^${IP}[ \t]*${ARG}.${domain}[ \t]*$/d" \
-e "s|^\(${IP}.*\)[ \t]${ARG}.${domain}|\1|g" "${ETC_HOSTS}"
;;
* )
${SIMU} curl -X DELETE "${GANDI_API}/records/${ARG}" -H "authorization: Apikey ${GANDI_KEY}"
echo
;;
esac
REMOVED+=("${ARG}")
done
echo "Domains removed from ${domain}: ${REMOVED[@]}"
}
#echo "CMD: ${CMD} $*"
${CMD} $*

66
bin/envoiMails.sh Executable file
View File

@ -0,0 +1,66 @@
#!/bin/bash
#kan: 09/09/2022
#koi: envoyer un mail à une liste (sans utiliser sympa sur listes.kaz.bzh)
#ki : fab
#on récupère toutes les variables et mdp
# on prend comme source des repertoire le dossier du dessus ( /kaz dans notre cas )
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
SIMULATION=NO
CMD="/tmp/envoiMail_cmds_to_run.sh"
echo "#!/bin/bash" > ${CMD} && chmod +x ${CMD}
#################################################################################################################
MAIL_KAZ="
KAZ, association morbihannaise, propose de \"dégoogliser\" l'internet avec des solutions et services numériques libres alternatifs à ceux des GAFAM. Elle invite les habitants et les élus de Vannes et de sa région à une réunion publique d'information et d'échange :
Le jeudi 22 septembre 2022 à 18 heures
à la Maison des associations, 31 rue Guillaume Le Bartz à Vannes.
Cette invitation est destinée à toute personne sensible aux enjeux du numérique, aux risques pour la démocratie et les libertés, à sa participation au dérèglement climatique et à l'épuisement des ressources de notre planète.
Nous dirons qui nous sommes, quelles sont nos valeurs et quels sont concrètement les solutions et services que nous proposons, leurs conditions d'accès et l'accompagnement utile pour leur prise en main.
Les premières pierres de KAZ ont été posées voilà bientôt deux ans, en pleine pandémie de la COVID, par sept citoyens qui ont voulu répondre à l'appel de Framasoft de dégoogliser l'internet. Ne plus se lamenter sur la puissance des GAFAM, mais proposer des solutions et des services numériques alternatifs à ceux de Google, Amazon, Facebook, Apple et Microsoft en s'appuyant sur les fondamentaux Sobre, Libre, Éthique et Local.
A ce jour, près de 200 particuliers ont ouvert une adresse @kaz.bzh, plus de 80 organisations ont souscrit au bouquet de services de KAZ et près de 800 collaborateurs d'organisations utilisatrices des services de KAZ participent peu ou prou au réseau de KAZ.
Beaucoup de services sont gratuits et accessibles sur le site https://kaz.bzh. D'autres demandent l'ouverture d'un compte moyennant une petite participation financière de 10€ par an pour les particuliers et de 30€ par an pour les organisations. Ceci est permis par le bénévolat des membres de la collégiale qui dirigent l'association et administrent les serveurs et les services.
A ce stade, de nombreuses questions se posent à KAZ. Quelle ambition de couverture de ses services auprès des particuliers et des organisations sur son territoire, le Morbihan ? Quel accompagnement dans la prise en main des outils ? Quels nouveaux services seraient utiles ?
Nous serions heureux de votre participation à notre réunion publique du 22 septembre d'une durée totale de 2h :
* Présentation valeurs / contexte (15mn)
* Présentation outils (45mn)
* Questions / Réponses (1h)
En restant à votre disposition,
La Collégiale de KAZ
"
#################################################################################################################
FIC_MAILS="/tmp/fic_mails"
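# ${FIC_MAILS} must contain one destination email address per line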
while read EMAIL_CIBLE
do
echo "docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset=\"UTF-8\"' -r contact@kaz.bzh -s \"Invitation rencontre KAZ jeudi 22 septembre à 18h00 à la Maison des assos à Vannes\" ${EMAIL_CIBLE} << EOF
${MAIL_KAZ}
EOF" | tee -a ${CMD}
echo "sleep 2" | tee -a ${CMD}
done < ${FIC_MAILS}
# des commandes à lancer ?
if [ "${SIMULATION}" == "NO" ];then
echo "on exécute"
${CMD}
else
echo "Aucune commande n'a été lancée: Possibilité de le faire à la main. cf ${CMD}"
fi

240
bin/foreign-domain.sh Executable file
View File

@ -0,0 +1,240 @@
#!/bin/bash
# liste/ajoute/supprime les domaines extérieurs à kaz.bzh
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
export PRG="$0"
cd $(dirname $0)
. "${DOCKERS_ENV}"
LETS_DIR="/etc/letsencrypt/$([ "${mode}" == "local" ] && echo "local" || echo "live")"
declare -a availableComposes availableOrga
availableComposes=(${pahekoHost} ${cloudHost} ${dokuwikiHost} ${wordpressHost} ${matterHost} ${castopodHost})
availableOrga=($(sed -e "s/\(.*\)[ \t]*#.*$/\1/" -e "s/^[ \t]*\(.*\)-orga$/\1/" -e "/^$/d" "${KAZ_CONF_DIR}/container-orga.list"))
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
# no more export in .env
export $(set | grep "domain=")
export CMD=""
export SIMU=""
export CHANGE=""
usage(){
echo "Usage: ${PRG} list [friend-domain...]"
echo " ${PRG} [-n] add orga [${pahekoHost} ${cloudHost} ${dokuwikiHost} ${wordpressHost} ${matterHost} ${castopodHost}] [friend-domain...] "
echo " ${PRG} [-n] del [friend-domain...]"
echo " ${PRG} -l"
echo " -l short list"
echo " -renewAll"
echo " -h help"
echo " -n simulation"
exit 1
}
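# Example (orga and domain names are illustrative): "./foreign-domain.sh -n add myorga ${cloudHost} example.org"
# would show the certificate, trusted_domains and proxy-map changes needed to
# bind example.org to myorga's cloud, without applying them.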
export CERT_CFG="${KAZ_CONF_PROXY_DIR}/foreign-certificate"
createCert () {
(
fileName="${LETS_DIR}/$1-key.pem"
#[ -f "${fileName}" ] || return
# if [ -f "${fileName}" ]; then
# fileTime=$(stat --format='%Y' "${fileName}")
# current_time=$(date +%s)
# if (( "${fileTime}" > ( "${current_time}" - ( 60 * 60 * 24 * 89 ) ) )); then
# exit
# fi
# fi
printKazMsg "create certificat for $1"
${SIMU} docker exec -i proxyServ bash -c "/opt/certbot/bin/certbot certonly -n --nginx -d $1"
)
}
for ARG in $@; do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-n' )
shift
export SIMU="echo"
;;
'-renewAll')
for i in $("${KAZ_BIN_DIR}/foreign-domain.sh" -l); do
echo "$i"
createCert "$i" |grep failed
done
exit
;;
'-l')
for compose in ${availableComposes[@]} ; do
grep "server_name" "${KAZ_CONF_PROXY_DIR}/${compose}_kaz_name" | sed -e "s/[ \t]*\([^#]*\)#.*/\1/g" -e "/^$/d" -e "s/.*server_name[ \t]\([^ ;]*\).*/\1/"
done
exit
;;
'list'|'add'|'del' )
shift
CMD="${ARG}"
break
;;
* )
usage
;;
esac
done
if [ -z "${CMD}" ]; then
echo "Commande missing"
usage
fi
########################################
badDomaine () {
[[ -z "$1" ]] && return 0;
[[ ! "$1" =~ ^[-.a-zA-Z0-9]*$ ]] && return 0;
return 1
}
badOrga () {
[[ -z "$1" ]] && return 0;
[[ ! " ${availableOrga[*]} " =~ " $1 " ]] && return 0
return 1
}
badCompose () {
[[ -z "$1" ]] && return 0;
[[ ! " ${availableComposes[*]} " =~ " $1 " ]] && return 0
return 1
}
########################################
listServ () {
for compose in ${availableComposes[@]} ; do
sed -e "s/[ \t]*\([^#]*\)#.*/\1/g" -e "/^$/d" -e "s/.*server_name[ \t]\([^ ;]*\).*/\1 : ${compose}/" "${KAZ_CONF_PROXY_DIR}/${compose}_kaz_name"
done
}
listOrgaServ () {
for compose in ${availableComposes[@]} ; do
sed -e "s/[ \t]*\([^#]*\)#.*/\1/g" -e "/^$/d" -e "s/\([^ ]*\)[ \t]*\([^ \t;]*\).*/\1 => \2 : ${compose}/" "${KAZ_CONF_PROXY_DIR}/${compose}_kaz_map"
done
}
########################################
list () {
previousOrga=$(listOrgaServ)
previousServ=$(listServ)
if [ $# -lt 1 ]; then
[ -n "${previousOrga}" ] && echo "${previousOrga}"
[ -n "${previousServ}" ] && echo "${previousServ}"
return
fi
for ARG in $@
do
orga=$(echo "${previousOrga}" | grep "${ARG}.* =>")
serv=$(echo "${previousServ}" | grep "${ARG}.* =>")
[ -n "${orga}" ] && echo "${orga}"
[ -n "${serv}" ] && echo "${serv}"
done
}
########################################
add () {
# $1 : orga
# $2 : service
# $3 : friend-domain
[ $# -lt 3 ] && usage
badOrga $1 && echo "bad orga: ${RED}$1${NC} not in ${GREEN}${availableOrga[@]}${NC}" && usage
badCompose $2 && echo "bad compose: ${RED}$2${NC} not in ${GREEN}${availableComposes[@]}${NC}" && usage
ORGA=$1
COMPOSE=$2
shift; shift
CLOUD_SERVNAME="${ORGA}-${nextcloudServName}"
CLOUD_CONFIG="${DOCK_VOL}/orga_${ORGA}-cloudConfig/_data/config.php"
# XXX check compose exist in orga ?
# /kaz/bin/kazList.sh service enable ${ORGA}
if [ "${COMPOSE}" = "${cloudHost}" ]; then
if ! [[ "$(docker ps -f name=${CLOUD_SERVNAME} | grep -w ${CLOUD_SERVNAME})" ]]; then
printKazError "${CLOUD_SERVNAME} not running... abort"
exit
fi
fi
for FRIEND in $@; do
badDomaine "${FRIEND}" && echo "bad domaine: ${RED}${FRIEND}${NC}" && usage
done
for FRIEND in $@; do
createCert "${FRIEND}"
if [ "${COMPOSE}" = "${cloudHost}" ]; then
IDX=$(awk 'BEGIN {flag=0; cpt=0} /trusted_domains/ {flag=1} /)/ {if (flag) {print cpt+1; exit 0}} / => / {if (flag && cpt<$1) cpt=$1}' "${CLOUD_CONFIG}")
${SIMU} docker exec -ti -u 33 "${CLOUD_SERVNAME}" /var/www/html/occ config:system:set trusted_domains "${IDX}" --value="${FRIEND}"
fi
previousOrga=$(listOrgaServ | grep "${FRIEND}")
[[ " ${previousOrga}" =~ " ${FRIEND} => ${ORGA} : ${COMPOSE}" ]] && echo " - already done" && continue
[[ " ${previousOrga}" =~ " ${FRIEND} " ]] && echo " - ${YELLOW}${BOLD}$(echo "${previousOrga}" | grep -e "${FRIEND}")${NC} must be deleted before" && return
if [[ -n "${SIMU}" ]] ; then
echo "${FRIEND} ${ORGA}; => ${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_map"
cat <<EOF
=> ${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_name
server_name ${FRIEND};
EOF
else
echo "${FRIEND} ${ORGA};" >> "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_map"
cat >> "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_name" <<EOF
server_name ${FRIEND};
EOF
fi
echo "${PRG}: ${FRIEND} added"
CHANGE="add"
done
#(cd "${KAZ_COMP_DIR}/${ORGA}-orga"; docker-compose restart)
}
########################################
del () {
[ $# -lt 1 ] && usage
for FRIEND in $@; do
badDomaine "${FRIEND}" && echo "bad domaine: ${RED}${FRIEND}${NC}" && usage
previous=$(listOrgaServ | grep -e "${FRIEND}")
[[ ! "${previous}" =~ ^${FRIEND} ]] && echo "${FRIEND} not found in ${previous}" && continue
# XXX if done OK
for COMPOSE in ${availableComposes[@]} ; do
if grep -q -e "^[ \t]*${FRIEND}[ \t]" "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_map" ; then
if [ "${COMPOSE}" = "${cloudHost}" ]; then
ORGA="$(grep "${FRIEND}" "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_map" | sed "s/^${FRIEND}\s*\([^;]*\);/\1/")"
CLOUD_CONFIG="${DOCK_VOL}/orga_${ORGA}-cloudConfig/_data/config.php"
${SIMU} sed -e "/\d*\s*=>\s*'${FRIEND}'/d" -i "${CLOUD_CONFIG}"
fi
${SIMU} sed -e "/^[ \t]*${FRIEND}[ \t]/d" -i "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_map"
fi
if grep -q -e "^[ \t]*server_name ${FRIEND};" "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_name" ; then
${SIMU} sed -i "${KAZ_CONF_PROXY_DIR}/${COMPOSE}_kaz_name" \
-e "/^[ \t]*server_name ${FRIEND};/d"
fi
done
echo "${PRG}: ${FRIEND} deleted"
CHANGE="del"
done
}
########################################
${CMD} $@
if [ -n "${CHANGE}" ] ; then
echo "Reload proxy conf"
for item in "${availableProxyComposes[@]}"; do
${SIMU} ${KAZ_COMP_DIR}/${item}/proxy-gen.sh
${SIMU} "${KAZ_COMP_DIR}/proxy/reload.sh"
done
fi
########################################

648
bin/gestContainers.sh Executable file
View File

@ -0,0 +1,648 @@
#!/bin/bash
# Script de manipulation des containers en masse
# init /versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
PRG=$(basename $0)
#TODO: ce tab doit être construit à partir de la liste des machines dispos et pas en dur
tab_sites_destinations_possibles=("kazoulet" "prod2" "prod1")
#GLOBAL VARS
NAS_VOL="/mnt/disk-nas1/docker/volumes/"
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
availableContainersCommuns=( $(getList "${KAZ_CONF_DIR}/container-withMail.list") $(getList "${KAZ_CONF_DIR}/container-withoutMail.list"))
OPERATE_ON_MAIN= # par defaut NON on ne traite que des orgas
OPERATE_ON_NAS_ORGA="OUI" # par defaut oui, on va aussi sur les orgas du NAS
OPERATE_LOCAL_ORGA="OUI" # par defaut oui
TEMPO_ACTION_STOP=2 # Lors de redémarrage avec tempo, on attend après le stop
TEMPO_ACTION_START=60 # Lors de redémarrage avec tempo, avant de reload le proxy
CONTAINERS_TYPES=
defaultContainersTypes="cloud agora wp wiki office paheko castopod" # les containers gérés par ce script.
declare -A DockerServNames # le nom des containers correspondant
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" [castopod]="${castopodServName}" )
declare -A FilterLsVolume # Pour trouver quel volume appartient à quel container
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" [castopod]="castopodMedia" )
declare -A composeDirs # Le nom du repertoire compose pour le commun
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" [castopod]="castopod" )
declare -A serviceNames # Le nom du service dans le docker-compose d'orga
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora" [castopod]="castopod")
declare -A subScripts
subScripts=( [cloud]="manageCloud.sh" [agora]="manageAgora.sh" [wiki]="manageWiki.sh" [wp]="manageWp.sh" [castopod]="manageCastopod.sh" )
declare -A OrgasOnNAS
declare -A OrgasLocales
declare -A NbOrgas
declare -A RunningOrgas
declare -A Posts
QUIET="1" # redirection des echo
OCCCOMANDS=()
MMCTLCOMANDS=()
EXECCOMANDS=()
# CLOUD
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
usage() {
echo "${PRG} [OPTION] [CONTAINERS_TYPES] [COMMANDES] [ORGAS]
Ce script regroupe l'ensemble des opérations que l'on souhaite automatiser sur plusieurs containers
Par defaut, sur les orgas, mais on peut aussi ajouter les communs
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
-m|--main Traite aussi le container commun (cloud commun / agora commun / wiki commun)
-M Ne traite que le container commun, et pas les orgas
--nas Ne traite QUE les orgas sur le NAS
--local Ne traite pas les orgas sur le NAS
-v|--version Donne la version des containers et signale les MàJ
-l|--list Liste des containers (up / down, local ou nas) de cette machine
CONTAINERS_TYPES
-cloud Pour agir sur les clouds
-agora Pour agir sur les agoras
-wp Les wp
-wiki Les wiki
-office Les collabora
-paheko Le paheko
-castopod Les castopod
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du container
-t Redémarre avec tempo (docker-compose down puis sleep ${TEMPO_ACTION_STOP} puis up puis sleep ${TEMPO_ACTION_START})
-r Redémarre sans tempo (docker restart)
-exec \"command\" Envoie une commande docker exec
--optim Lance la procédure Nextcloud pour optimiser les performances ** **
-occ \"command\" Envoie une commande via occ ** **
-u Mets à jour les applis ** SPECIFIQUES **
-i Install des applis ** CLOUD **
-a \"app1 app2 ...\" Choix des appli à installer ou mettre à jour (entre guillemets) ** **
-U|--upgrade Upgrade des clouds ** **
-mmctl \"command\" Envoie une commande via mmctl ** SPECIFIQUES **
-p|--post \"team\" \"message\" Poste un message dans une team agora ** AGORA **
ORGAS
[orga1 orga2 ... ] on peut filtrer parmi : ${AVAILABLE_ORGAS}
Exemples :
${PRG} -office -m -r # restart de tous les collaboras (libère RAM)
${PRG} -cloud -u -r -q -n # affiche toutes les commandes (-n -q) pour mettre à jour toutes les applis des clouds + restart (-u -r)
${PRG} -p \"monorga:town-square\" \"Hello\" monorga # envoie Hello sur le centreville de l'orga monorga sur son mattermost dédié
"
}
####################################################
################ fonctions clefs ###################
####################################################
_populate_lists(){
# récupère les listes d'orga à traiter
# on remplit les tableaux OrgasOnNAS / OrgasLocales / NbOrgas ... par type de container
if [ -z "${CONTAINERS_TYPES}" ]; then
# wow, on traite tout le monde d'un coup...
CONTAINERS_TYPES="$defaultContainersTypes"
fi
for TYPE in ${CONTAINERS_TYPES}; do
if [ -n "${FilterLsVolume[$TYPE]}" ] ; then # on regarde dans les volumes
[ -n "$OPERATE_ON_NAS_ORGA" ] && OrgasOnNAS["$TYPE"]=$( _getListOrgas ${NAS_VOL} ${FilterLsVolume[$TYPE]} )
[ -n "$OPERATE_LOCAL_ORGA" ] && OrgasLocales["$TYPE"]=$( _getListOrgas ${DOCK_VOL} ${FilterLsVolume[$TYPE]} "SANSLN")
else # un docker ps s'il n'y a pas de volumes
[ -n "$OPERATE_LOCAL_ORGA" ] && OrgasLocales["$TYPE"]=$(docker ps --format '{{.Names}}' | grep ${DockerServNames[$TYPE]} | sed -e "s/-*${DockerServNames[$TYPE]}//")
fi
NbOrgas["$TYPE"]=$(($(echo ${OrgasOnNAS["$TYPE"]} | wc -w) + $(echo ${OrgasLocales["$TYPE"]} | wc -w)))
RunningOrgas["$TYPE"]=$(docker ps --format '{{.Names}}' | grep ${DockerServNames[$TYPE]} | sed -e "s/-*${DockerServNames[$TYPE]}//")
done
}
_getListOrgas(){
# retrouve les orgas à partir des volume présents
# $1 where to lookup
# $2 filter
# $3 removeSymbolicLinks
[ ! -d $1 ] || [ -z "$2" ] && return 1 # si le repertoire n'existe pas on skip
LIST=$(ls "${1}" | grep -i orga | grep -i "$2" | sed -e "s/-${2}$//g" | sed -e 's/^orga_//')
[ -n "$3" ] && LIST=$(ls -F "${1}" | grep '/' | grep -i orga | grep -i "$2" | sed -e "s/-${2}\/$//g" | sed -e 's/^orga_//')
LIST=$(comm -12 <(printf '%s\n' ${LIST} | sort) <(printf '%s\n' ${AVAILABLE_ORGAS} | sort))
echo "$LIST"
}
_executeFunctionForAll(){
# Parcours des container et lancement des commandes
# Les commandes ont en derniers paramètres le type et l'orga et une string parmi KAZ/ORGANAS/ORGALOCAL pour savoir sur quoi on opère
# $1 function
# $2 nom de la commande
# $3 quel types de containers
# $4 params : quels paramètres à passer à la commande (les clefs sont #ORGA# #DOCKERSERVNAME# #SURNAS# #ISMAIN# #TYPE# #COMPOSEDIR# )
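# Exemple (illustratif) : _executeFunctionForAll "_restartContainer" "Restart" "cloud" "#DOCKERSERVNAME#"
# exécutera _restartContainer <orga>-${nextcloudServName} pour chaque orga cloud trouvée (et le cloud commun si -m)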
for TYPE in ${3}; do
if [ -n "$OPERATE_ON_MAIN" ]; then
if [[ -n "${composeDirs[$TYPE]}" && "${availableContainersCommuns[*]}" =~ "${composeDirs[$TYPE]}" ]]; then # pas de cloud / agora / wp / wiki sur cette instance
Dockername=${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#//g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/OUI/g;s/#SURNAS#/NON/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${composeDirs[$TYPE]}%g" )
echo "-------- $2 $TYPE COMMUN ----------------------------" >& $QUIET
eval "$1" $PARAMS
fi
fi
if [[ ${NbOrgas[$TYPE]} -gt 0 ]]; then
echo "-------- $2 des $TYPE des ORGAS ----------------------------" >& $QUIET
COMPTEUR=1
if [ -n "$OPERATE_LOCAL_ORGA" ]; then
for ORGA in ${OrgasLocales[$TYPE]}; do
Dockername=${ORGA}-${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#/${ORGA}/g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/NON/g;s/#SURNAS#/NON/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${ORGA}-orga%g" )
echo "${RED} ${ORGA}-orga ${NC}($COMPTEUR/${NbOrgas[$TYPE]})" >& $QUIET
eval "$1" $PARAMS
COMPTEUR=$((COMPTEUR + 1))
done
fi
if [ -n "$OPERATE_ON_NAS_ORGA" ]; then
for ORGA in ${OrgasOnNAS[$TYPE]}; do
Dockername=${ORGA}-${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#/${ORGA}/g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/NON/g;s/#SURNAS#/OUI/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${ORGA}-orga%g" )
echo "${RED} ${ORGA}-orga ${NC}($COMPTEUR/${NbOrgas[$TYPE]})" >& $QUIET
eval "$1" $PARAMS
COMPTEUR=$((COMPTEUR + 1))
done
fi
fi
done
}
##############################################
################ COMMANDES ###################
##############################################
Init(){
# Initialisation des containers
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Initialisation" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_initContainer" "Initialisation" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #SURNAS# #ORGA# "
}
restart-compose() {
# Parcours les containers et redémarre avec tempo
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "DOCKER-COMPOSE DOWN puis sleep ${TEMPO_ACTION_STOP}" >& $QUIET
echo "DOCKER-COMPOSE UP puis sleep ${TEMPO_ACTION_START}" >& $QUIET
echo "de ${CONTAINERS_TYPES} pour $NB_ORGAS_STR" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_restartContainerAvecTempo" "Restart" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #COMPOSEDIR#"
${SIMU} sleep ${TEMPO_ACTION_START}
_reloadProxy
echo "--------------------------------------------------------" >& $QUIET
echo "${GREEN}FIN${NC} " >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
}
restart() {
# Parcours les containers et redémarre
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "DOCKER RESTART des ${CONTAINERS_TYPES} pour $NB_ORGAS_STR" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_restartContainer" "Restart" "${CONTAINERS_TYPES[@]}" "#DOCKERSERVNAME#"
_reloadProxy
echo "--------------------------------------------------------" >& $QUIET
echo "${GREEN}FIN${NC} " >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
}
version(){
# Parcours les containers et affiche leurs versions
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "VERSIONS" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_versionContainer" "Version" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #ORGA#"
}
listContainers(){
echo "${NC}--------------------------------------------------------"
echo "LISTES"
echo "------------------------------------------------------------"
for TYPE in ${CONTAINERS_TYPES}; do
echo "****************** $TYPE ****************"
_listContainer "$TYPE"
done
}
######################## Fonctions génériques #######################
_initContainer(){
# $1 type
# $2 COMMUN
# $3 ON NAS
# $4 orgas
if [ -n "${subScripts[$1]}" ] ; then
evalStr="${KAZ_BIN_DIR}/${subScripts[$1]} --install"
if [ "$3" = "OUI" ]; then evalStr="${evalStr} -nas" ; fi
if [ ! "$QUIET" = "1" ]; then evalStr="${evalStr} -q" ; fi
if [ -n "$SIMU" ]; then evalStr="${evalStr} -n" ; fi
if [ ! "$2" = "OUI" ]; then evalStr="${evalStr} $4" ; fi
eval $evalStr
fi
}
_restartContainer(){
# $1 Dockername
echo -n "${NC}Redemarrage ... " >& $QUIET
${SIMU}
${SIMU} docker restart $1
echo "${GREEN}OK${NC}" >& $QUIET
}
_restartContainerAvecTempo(){
# $1 type
# $2 main container
# $3 composeDir
dir=$3
if [ -z "$dir" ]; then return 1; fi # le compose n'existe pas ... par exemple wordpress commun
cd "$dir"
echo -n "${NC}Arrêt ... " >& $QUIET
${SIMU}
if [ "$2" = "OUI" ]; then ${SIMU} docker-compose stop ;
else ${SIMU} docker-compose stop "${serviceNames[$1]}"
fi
${SIMU} sleep ${TEMPO_ACTION_STOP}
echo "${GREEN}OK${NC}" >& $QUIET
echo -n "${NC}Démarrage ... " >& $QUIET
if [ "$2" = "OUI" ]; then ${SIMU} docker-compose up -d ;
else ${SIMU} docker-compose up -d "${serviceNames[$1]}"
fi
${SIMU} sleep ${TEMPO_ACTION_START}
echo "${GREEN}OK${NC}" >& $QUIET
}
_reloadProxy() {
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
for item in "${availableProxyComposes[@]}"; do
${SIMU} ${KAZ_COMP_DIR}/${item}/reload.sh
done
}
_versionContainer() {
# Affiche la version d'un container donné
# $1 type
# $2 COMMUN
# $3 orgas
if [ -n "${subScripts[$1]}" ] ; then
evalStr="${KAZ_BIN_DIR}/${subScripts[$1]} --version"
if [ ! "$2" = "OUI" ]; then evalStr="${evalStr} $3" ; fi
eval $evalStr
fi
}
_listContainer(){
# pour un type donné (cloud / agora / wiki / wp), fait une synthèse de qui est up et down / nas ou local
# $1 type
RUNNING_FROM_NAS=$(comm -12 <(printf '%s\n' ${OrgasOnNAS[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
RUNNING_LOCAL=$(comm -12 <(printf '%s\n' ${OrgasLocales[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
# tu l'as vu la belle commande pour faire une exclusion de liste
DOWN_ON_NAS=$(comm -23 <(printf '%s\n' ${OrgasOnNAS[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
DOWN_LOCAL=$(comm -23 <(printf '%s\n' ${OrgasLocales[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort)| sed -e ':a;N;$!ba;s/\n/ /g')
NB_SUR_NAS=$(echo ${OrgasOnNAS[$1]} | wc -w)
NB_LOCAUX=$(echo ${OrgasLocales[$1]} | wc -w)
NB_RUNNING_SUR_NAS=$(echo $RUNNING_FROM_NAS | wc -w)
NB_RUNNING_LOCALLY=$(echo $RUNNING_LOCAL | wc -w)
MAIN_RUNNING="${RED}DOWN${NC}"
if docker ps | grep -q " ${DockerServNames[$1]}"
then
MAIN_RUNNING="${GREEN}UP${NC}"
fi
[ -n "${composeDirs[${1}]}" ] && echo "${NC}Le ${1} commun est $MAIN_RUNNING"
if [[ ${NbOrgas[$1]} -gt 0 ]]; then
ENLOCALSTR=
if [[ ${NB_RUNNING_SUR_NAS[$1]} -gt 0 ]]; then ENLOCALSTR=" en local" ; fi
echo "Orgas : $NB_RUNNING_LOCALLY / $NB_LOCAUX running ${1}$ENLOCALSTR"
echo "${NC}UP : ${GREEN}${RUNNING_LOCAL}"
echo "${NC}DOWN : ${RED}$DOWN_LOCAL${NC}"
if [[ ${NB_RUNNING_SUR_NAS[$1]} -gt 0 ]]; then
echo "${NC}Orgas : $NB_RUNNING_SUR_NAS / $NB_SUR_NAS running depuis le NAS :"
echo "${NC}UP : ${GREEN}${RUNNING_FROM_NAS}"
echo "${NC}DOWN : ${RED}$DOWN_ON_NAS${NC}"
fi
fi
}
#########################################################
############# FONCTIONS SPECIFIQUES #####################
#########################################################
##################################
############### CLOUD ############
##################################
UpgradeClouds() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "UPGRADE des cloud" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
RunOCCCommand "upgrade"
}
OptimiseClouds() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Optimisation des cloud" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
RunOCCCommand "db:add-missing-indices" "db:convert-filecache-bigint --no-interaction"
}
InstallApps(){
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "INSTALL DES APPLIS sur les clouds : ${LISTE_APPS}" >& $QUIET
echo "-------------------------------------------------------------" >& $QUIET
if [ -z "${LISTE_APPS}" ]; then
echo "Aucune appli n'est précisée, j'installe les applis par défaut : ${APPLIS_PAR_DEFAUT}" >& $QUIET
LISTE_APPS="${APPLIS_PAR_DEFAUT}"
fi
PARAMS="-a \"$LISTE_APPS\""
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["cloud"]} -i $PARAMS" "Install des applis" "cloud" "#ORGA#"
}
UpdateApplis() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "UPDATE DES APPLIS des cloud : ${LISTE_APPS}" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
PARAMS="-a ${LISTE_APPS}"
if [ -z "${LISTE_APPS}" ]; then
echo "Aucune appli n'est précisée, je les met toutes à jour! " >& $QUIET
PARAMS=
fi
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["cloud"]} -u $PARAMS" "Maj des applis" "cloud" "#ORGA#"
}
##################################
############### AGORA ############
##################################
PostMessages(){
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Envoi de messages sur mattermost" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
for TEAM in "${!Posts[@]}"
do
MSG=${Posts[$TEAM]/\"/\\\"}
PARAMS="-p \"$TEAM\" \"$MSG\""
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["agora"]} $PARAMS" "Post vers $TEAM sur l'agora" "agora" "#ORGA#"
done
}
########## LANCEMENT COMMANDES OCC / MMCTL ############
RunCommands() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Envoi de commandes en direct" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
# $1 OCC / MMCTL / EXEC
# $ suivants : les commandes
for command in "${@:2}"
do
if [ $1 = "OCC" ]; then RunOCCCommand "$command" ; fi
if [ $1 = "MMCTL" ]; then RunMMCTLCommand "$command" ; fi
if [ $1 = "EXEC" ]; then RunEXECCommand "$command" ; fi
done
}
_runSingleOccCommand(){
# $1 Command
# $2 Dockername
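# -u 33 : exécute occ en tant que www-data (UID 33 dans l'image Nextcloud)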
${SIMU} docker exec -u 33 $2 /var/www/html/occ $1
}
_runSingleMmctlCommand(){
# $1 Command
# $2 Dockername
${SIMU} docker exec $2 bin/mmctl $1
}
_runSingleExecCommand(){
# $1 Command
# $2 Dockername
${SIMU} docker exec $2 $1
}
RunOCCCommand() {
# $1 Command
_executeFunctionForAll "_runSingleOccCommand \"${1}\"" "OCC $1" "cloud" "#DOCKERSERVNAME#"
}
RunMMCTLCommand() {
# $1 Command
_executeFunctionForAll "_runSingleMmctlCommand \"${1}\"" "MMCTL $1" "agora" "#DOCKERSERVNAME#"
}
RunEXECCommand() {
# $1 Command
_executeFunctionForAll "_runSingleExecCommand \"${1}\"" "docker exec $1" "${CONTAINERS_TYPES[@]}" "#DOCKERSERVNAME#"
}
########## Main #################
for ARG in "$@"; do
if [ -n "${GETOCCCOMAND}" ]; then # après un -occ
OCCCOMANDS+=("${ARG}")
GETOCCCOMAND=
elif [ -n "${GETEXECCOMAND}" ]; then # après un -exec
EXECCOMANDS+=("${ARG}")
GETEXECCOMAND=
elif [ -n "${GETAPPS}" ]; then # après un -a
LISTE_APPS="${LISTE_APPS} ${ARG}"
GETAPPS=""
elif [ -n "${GETMMCTLCOMAND}" ]; then # après un -mmctl
MMCTLCOMANDS+=("${ARG}")
GETMMCTLCOMAND=
elif [ -n "${GETTEAM}" ]; then # après un --post
GETMESSAGE="now"
GETTEAM=""
TEAM="${ARG}"
elif [ -n "${GETMESSAGE}" ]; then # après un --post "team:channel"
if [[ $TEAM == "-*" && ${#TEAM} -le 5 ]]; then echo "J'envoie mon message à \"${TEAM}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ $ARG == "-*" && ${#ARG} -le 5 ]]; then echo "J'envoie le message \"${ARG}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ ! $TEAM =~ .*:.+ ]]; then echo "Il faut mettre un destinataire sous la forme team:channel. Recommence !"; usage ; exit 1 ; fi
Posts+=( ["${TEAM}"]="$ARG" )
GETMESSAGE=""
TEAM=""
else
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'-m' | '--main' )
OPERATE_ON_MAIN="OUI-OUI" ;;
'-M' )
AVAILABLE_ORGAS= && OPERATE_ON_MAIN="OUI-OUI" ;; #pas d'orgas
'--nas' | '-nas' )
OPERATE_LOCAL_ORGA= ;; # pas les locales
'--local' | '-local' )
OPERATE_ON_NAS_ORGA= ;; # pas celles sur NAS
'-cloud'|'--cloud')
CONTAINERS_TYPES="${CONTAINERS_TYPES} cloud" ;;
'-agora'|'--agora'|'-mm'|'--mm'|'-matter'*|'--matter'*)
CONTAINERS_TYPES="${CONTAINERS_TYPES} agora" ;;
'-wiki'|'--wiki')
CONTAINERS_TYPES="${CONTAINERS_TYPES} wiki" ;;
'-wp'|'--wp')
CONTAINERS_TYPES="${CONTAINERS_TYPES} wp" ;;
'-office'|'--office'|'-collab'*|'--collab'*)
CONTAINERS_TYPES="${CONTAINERS_TYPES} office" ;;
'-paheko'|'--paheko')
CONTAINERS_TYPES="${CONTAINERS_TYPES} paheko" ;;
'-pod'|'--pod'|'-castopod'|'--castopod')
CONTAINERS_TYPES="${CONTAINERS_TYPES} castopod" ;;
'-t' )
COMMANDS="${COMMANDS} RESTART-COMPOSE" ;;
'-r' )
COMMANDS="${COMMANDS} RESTART-DOCKER" ;;
'-l' | '--list' )
COMMANDS="$(echo "${COMMANDS} LIST" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
'-U' | '--upgrade')
COMMANDS="$(echo "${COMMANDS} UPGRADE" | sed "s/\s/\n/g" | sort | uniq)" ;;
'--optim' )
COMMANDS="$(echo "${COMMANDS} OPTIMISE-CLOUD" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-u' )
COMMANDS="$(echo "${COMMANDS} UPDATE-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-i' )
COMMANDS="$(echo "${COMMANDS} INSTALL-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-a' )
GETAPPS="now" ;;
'-occ' )
COMMANDS="$(echo "${COMMANDS} RUN-CLOUD-OCC" | sed "s/\s/\n/g" | sort | uniq)"
GETOCCCOMAND="now" ;;
'-mmctl' )
COMMANDS="$(echo "${COMMANDS} RUN-AGORA-MMCTL" | sed "s/\s/\n/g" | sort | uniq)"
GETMMCTLCOMAND="now" ;;
'-exec' )
COMMANDS="$(echo "${COMMANDS} RUN-DOCKER-EXEC" | sed "s/\s/\n/g" | sort | uniq)"
GETEXECCOMAND="now" ;;
'-p' | '--post' )
COMMANDS="$(echo "${COMMANDS} POST-AGORA" | sed "s/\s/\n/g" | sort | uniq)"
GETTEAM="now" ;;
-* ) # ignore les options inconnues
;;
*)
GIVEN_ORGA="${GIVEN_ORGA} ${ARG%-orga}"
;;
esac
fi
done
if [[ "${COMMANDS[*]}" =~ "RESTART-COMPOSE" && "${COMMANDS[*]}" =~ "RESTART-TYPE" ]]; then
echo "Je restarte via docker-compose ou via docker mais pas les deux !"
usage
exit 1
fi
if [ -z "${COMMANDS}" ]; then
usage && exit
fi
if [ -n "${GIVEN_ORGA}" ]; then
# intersection des 2 listes : quelle commande de ouf !!
AVAILABLE_ORGAS=$(comm -12 <(printf '%s\n' ${AVAILABLE_ORGAS} | sort) <(printf '%s\n' ${GIVEN_ORGA} | sort))
fi
NB_ORGAS=$(echo "${AVAILABLE_ORGAS}" | wc -w )
if [[ $NB_ORGAS = 0 && -z "${OPERATE_ON_MAIN}" ]]; then
echo "Aucune orga trouvée."
exit 1
fi
NB_ORGAS_STR="$NB_ORGAS orgas"
[ -n "${OPERATE_ON_MAIN}" ] && NB_ORGAS_STR="$NB_ORGAS_STR + les communs"
_populate_lists # on récupère les clouds / agora / wiki / wp correspondants aux orga
if [[ $NB_ORGAS -gt 2 && "${COMMANDS[*]}" =~ 'INIT' ]]; then
ETLECLOUDCOMMUN=
[ -n "${OPERATE_ON_MAIN}" ] && ETLECLOUDCOMMUN=" ainsi que les containers commun"
echo "On s'apprête à initialiser les ${CONTAINERS_TYPES} suivants : ${AVAILABLE_ORGAS}${ETLECLOUDCOMMUN}"
checkContinue
fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'LIST' )
listContainers && exit ;;
'VERSION' )
version && exit ;;
'OPTIMISE-CLOUD' )
OptimiseClouds ;;
'RESTART-COMPOSE' )
restart-compose ;;
'RESTART-DOCKER' )
restart ;;
'UPDATE-CLOUD-APP' )
UpdateApplis ;;
'UPGRADE' )
UpgradeClouds ;;
'INIT' )
Init ;;
'INSTALL-CLOUD-APP' )
InstallApps ;;
'RUN-CLOUD-OCC' )
RunCommands "OCC" "${OCCCOMANDS[@]}" ;;
'RUN-AGORA-MMCTL' )
RunCommands "MMCTL" "${MMCTLCOMANDS[@]}" ;;
'RUN-DOCKER-EXEC' )
RunCommands "EXEC" "${EXECCOMANDS[@]}" ;;
'POST-AGORA' )
PostMessages ;;
esac
done

659
bin/gestContainers_v2.sh Executable file
View File

@ -0,0 +1,659 @@
#!/bin/bash
# Script de manipulation des containers en masse
# init /versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
PRG=$(basename $0)
tab_sites_destinations_possibles=($(get_Serveurs_Kaz))
# ${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${CMD}"
# SITE_DST="${tab_sites_destinations_possibles[1]}"
# ${tab_sites_destinations_possibles[@]}
#GLOBAL VARS
NAS_VOL="/mnt/disk-nas1/docker/volumes/"
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
availableContainersCommuns=( $(getList "${KAZ_CONF_DIR}/container-withMail.list") $(getList "${KAZ_CONF_DIR}/container-withoutMail.list"))
OPERATE_ON_MAIN= # par defaut NON on ne traite que des orgas
OPERATE_ON_NAS_ORGA="OUI" # par defaut oui, on va aussi sur les orgas du NAS
OPERATE_LOCAL_ORGA="OUI" # par defaut oui
TEMPO_ACTION_STOP=2 # Lors de redémarrage avec tempo, on attend après le stop
TEMPO_ACTION_START=120 # Lors de redémarrage avec tempo, avant de reload le proxy
CONTAINERS_TYPES=
defaultContainersTypes="cloud agora wp wiki office paheko" # les containers gérés par ce script.
declare -A DockerServNames # le nom des containers correspondant
DockerServNames=( [cloud]="${nextcloudServName}" [agora]="${mattermostServName}" [wiki]="${dokuwikiServName}" [wp]="${wordpressServName}" [office]="${officeServName}" [paheko]="${pahekoServName}" )
declare -A FilterLsVolume # Pour trouver quel volume appartient à quel container
FilterLsVolume=( [cloud]="cloudMain" [agora]="matterConfig" [wiki]="wikiConf" [wp]="wordpress" )
declare -A composeDirs # Le nom du repertoire compose pour le commun
composeDirs=( [cloud]="cloud" [agora]="mattermost" [wiki]="dokuwiki" [office]="collabora" [paheko]="paheko" )
declare -A serviceNames # Le nom du service dans le docker-compose d'orga
serviceNames=( [cloud]="cloud" [agora]="agora" [wiki]="dokuwiki" [wp]="wordpress" [office]="collabora")
declare -A subScripts
subScripts=( [cloud]="manageCloud.sh" [agora]="manageAgora.sh" [wiki]="manageWiki.sh" [wp]="manageWp.sh" )
declare -A OrgasOnNAS
declare -A OrgasLocales
declare -A NbOrgas
declare -A RunningOrgas
declare -A Posts
QUIET="1" # redirection des echo
OCCCOMANDS=()
MMCTLCOMANDS=()
EXECCOMANDS=()
# CLOUD
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks richdocuments external drawio snappymail"
usage() {
echo "${PRG} [OPTION] [CONTAINERS_TYPES] [COMMANDES] [SERVEURS] [ORGAS]
Ce script regroupe l'ensemble des opérations que l'on souhaite automatiser sur plusieurs containers, sur un ou plusieurs sites.
Par defaut, sur les orgas, mais on peut aussi ajouter les communs
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
-m|--main Traite aussi le container commun (cloud commun / agora commun / wiki commun)
-M Ne traite que le container commun, et pas les orgas
--nas Ne traite QUE les orgas sur le NAS
--local Ne traite pas les orgas sur le NAS
-v|--version Donne la version des containers et signale les MàJ
-l|--list Liste des containers (up / down, local ou nas) de cette machine
CONTAINERS_TYPES
-cloud Pour agir sur les clouds
-agora Pour agir sur les agoras
-wp Les wp
-wiki Les wiki
-office Les collabora
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du container
-t Redémarre avec tempo (docker-compose down puis sleep ${TEMPO_ACTION_STOP} puis up puis sleep ${TEMPO_ACTION_START})
-r Redémarre sans tempo (docker restart)
-exec \"command\" Envoie une commande docker exec
--optim Lance la procédure Nextcloud pour optimiser les performances ** **
-occ \"command\" Envoie une commande via occ ** **
-u Mets à jour les applis ** SPECIFIQUES **
-i Install des applis ** CLOUD **
-a \"app1 app2 ...\" Choix des appli à installer ou mettre à jour (entre guillemets) ** **
-U|--upgrade Upgrade des clouds ** **
-mmctl \"command\" Envoie une commande via mmctl ** SPECIFIQUES **
-p|--post \"team\" \"message\" Poste un message dans une team agora ** AGORA **
SERVEURS
--all-srv Lance sur tous les serveurs ${tab_sites_destinations_possibles[@]}, sinon c'est uniquement sur ${site}
ORGAS sur ${site}
[orga1 orga2 ... ] on peut filtrer parmi : ${AVAILABLE_ORGAS}
Exemples :
${PRG} -office -m -r # restart de tous les collaboras (libère RAM)
${PRG} -cloud -u -r -q -n # affiche toutes les commandes (-n -q ) pour mettre à jour toutes les applis des clouds + restart (-u -r)
${PRG} -p \"monorga:town-square\" \"Hello\" monorga # envoie Hello sur le centreville de l'orga monorga sur son mattermost dédié
${PRG} -cloud -occ \"config:system:set default_phone_region --value='FR'\" --all-srv # modifie la variable default_phone_region dans le config.php de tous les clouds de tous les serveurs
"
}
####################################################
################ fonctions clefs ###################
####################################################
_populate_lists(){
# récupère les listes d'orga à traiter
# on remplit les tableaux OrgasOnNAS / OrgasLocales / NbOrgas ... par type de container
if [ -z "${CONTAINERS_TYPES}" ]; then
# wow, on traite tout le monde d'un coup...
CONTAINERS_TYPES="$defaultContainersTypes"
fi
for TYPE in ${CONTAINERS_TYPES}; do
if [ -n "${FilterLsVolume[$TYPE]}" ] ; then # on regarde dans les volumes
[ -n "$OPERATE_ON_NAS_ORGA" ] && OrgasOnNAS["$TYPE"]=$( _getListOrgas ${NAS_VOL} ${FilterLsVolume[$TYPE]} )
[ -n "$OPERATE_LOCAL_ORGA" ] && OrgasLocales["$TYPE"]=$( _getListOrgas ${DOCK_VOL} ${FilterLsVolume[$TYPE]} "SANSLN")
else # un docker ps s'il n'y a pas de volumes
[ -n "$OPERATE_LOCAL_ORGA" ] && OrgasLocales["$TYPE"]=$(docker ps --format '{{.Names}}' | grep ${DockerServNames[$TYPE]} | sed -e "s/-*${DockerServNames[$TYPE]}//")
fi
NbOrgas["$TYPE"]=$(($(echo ${OrgasOnNAS["$TYPE"]} | wc -w) + $(echo ${OrgasLocales["$TYPE"]} | wc -w)))
RunningOrgas["$TYPE"]=$(docker ps --format '{{.Names}}' | grep ${DockerServNames[$TYPE]} | sed -e "s/-*${DockerServNames[$TYPE]}//")
done
}
_getListOrgas(){
# retrouve les orgas à partir des volume présents
# $1 where to lookup
# $2 filter
# $3 removeSymbolicLinks
[ ! -d $1 ] || [ -z "$2" ] && return 1 # si le repertoire n'existe pas on skip
LIST=$(ls "${1}" | grep -i orga | grep -i "$2" | sed -e "s/-${2}$//g" | sed -e 's/^orga_//')
[ -n "$3" ] && LIST=$(ls -F "${1}" | grep '/' | grep -i orga | grep -i "$2" | sed -e "s/-${2}\/$//g" | sed -e 's/^orga_//')
LIST=$(comm -12 <(printf '%s\n' ${LIST} | sort) <(printf '%s\n' ${AVAILABLE_ORGAS} | sort))
echo "$LIST"
}
_executeFunctionForAll(){
# Parcours des container et lancement des commandes
# Les commandes ont en derniers paramètres le type et l'orga et une string parmi KAZ/ORGANAS/ORGALOCAL pour savoir sur quoi on opère
# $1 function
# $2 nom de la commande
# $3 quel types de containers
# $4 params : quels paramètres à passer à la commande (les clefs sont #ORGA# #DOCKERSERVNAME# #SURNAS# #ISMAIN# #TYPE# #COMPOSEDIR# )
for TYPE in ${3}; do
if [ -n "$OPERATE_ON_MAIN" ]; then
if [[ -n "${composeDirs[$TYPE]}" && "${availableContainersCommuns[*]}" =~ "${composeDirs[$TYPE]}" ]]; then # pas de cloud / agora / wp / wiki sur cette instance
Dockername=${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#//g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/OUI/g;s/#SURNAS#/NON/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${composeDirs[$TYPE]}%g" )
echo "-------- $2 $TYPE COMMUN ----------------------------" >& $QUIET
eval "$1" $PARAMS
fi
fi
if [[ ${NbOrgas[$TYPE]} -gt 0 ]]; then
echo "-------- $2 des $TYPE des ORGAS ----------------------------" >& $QUIET
COMPTEUR=1
if [ -n "$OPERATE_LOCAL_ORGA" ]; then
for ORGA in ${OrgasLocales[$TYPE]}; do
Dockername=${ORGA}-${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#/${ORGA}/g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/NON/g;s/#SURNAS#/NON/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${ORGA}-orga%g" )
echo "${RED} ${ORGA}-orga ${NC}($COMPTEUR/${NbOrgas[$TYPE]})" >& $QUIET
eval "$1" $PARAMS
COMPTEUR=$((COMPTEUR + 1))
done
fi
if [ -n "$OPERATE_ON_NAS_ORGA" ]; then
for ORGA in ${OrgasOnNAS[$TYPE]}; do
Dockername=${ORGA}-${DockerServNames[$TYPE]}
PARAMS=$(echo $4 | sed -e "s/#ORGA#/${ORGA}/g;s/#DOCKERSERVNAME#/$Dockername/g;s/#ISMAIN#/NON/g;s/#SURNAS#/OUI/g;s/#TYPE#/$TYPE/g;s%#COMPOSEDIR#%${KAZ_COMP_DIR}/${ORGA}-orga%g" )
echo "${RED} ${ORGA}-orga ${NC}($COMPTEUR/${NbOrgas[$TYPE]})" >& $QUIET
eval "$1" $PARAMS
COMPTEUR=$((COMPTEUR + 1))
done
fi
fi
done
}
##############################################
################ COMMANDES ###################
##############################################
Init(){
# Initialisation des containers
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Initialisation" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_initContainer" "Initialisation" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #SURNAS# #ORGA# "
}
restart-compose() {
# Parcours les containers et redémarre avec tempo
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "DOCKER-COMPOSE DOWN puis sleep ${TEMPO_ACTION_STOP}" >& $QUIET
echo "DOCKER-COMPOSE UP puis sleep ${TEMPO_ACTION_START}" >& $QUIET
echo "de ${CONTAINERS_TYPES} pour $NB_ORGAS_STR" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_restartContainerAvecTempo" "Restart" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #COMPOSEDIR#"
${SIMU} sleep ${TEMPO_ACTION_START}
_reloadProxy
echo "--------------------------------------------------------" >& $QUIET
echo "${GREEN}FIN${NC} " >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
}
restart() {
# Parcours les containers et redémarre
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "DOCKER RESTART des ${CONTAINERS_TYPES} pour $NB_ORGAS_STR" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_restartContainer" "Restart" "${CONTAINERS_TYPES[@]}" "#DOCKERSERVNAME#"
_reloadProxy
echo "--------------------------------------------------------" >& $QUIET
echo "${GREEN}FIN${NC} " >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
}
version(){
# Parcours les containers et affiche leurs versions
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "VERSIONS" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
_executeFunctionForAll "_versionContainer" "Version" "${CONTAINERS_TYPES[@]}" "#TYPE# #ISMAIN# #ORGA#"
}
listContainers(){
echo "${NC}--------------------------------------------------------"
echo "LISTES"
echo "------------------------------------------------------------"
for TYPE in ${CONTAINERS_TYPES}; do
echo "****************** $TYPE ****************"
_listContainer "$TYPE"
done
}
######################## Fonctions génériques #######################
_initContainer(){
# $1 type
# $2 COMMUN
# $3 ON NAS
# $4 orgas
if [ -n "${subScripts[$1]}" ] ; then
evalStr="${KAZ_BIN_DIR}/${subScripts[$1]} --install"
if [ "$3" = "OUI" ]; then evalStr="${evalStr} -nas" ; fi
if [ ! "$QUIET" = "1" ]; then evalStr="${evalStr} -q" ; fi
if [ -n "$SIMU" ]; then evalStr="${evalStr} -n" ; fi
if [ ! "$2" = "OUI" ]; then evalStr="${evalStr} $4" ; fi
eval $evalStr
fi
}
_restartContainer(){
# $1 Dockername
echo -n "${NC}Redemarrage ... " >& $QUIET
${SIMU}
${SIMU} docker restart $1
echo "${GREEN}OK${NC}" >& $QUIET
}
_restartContainerAvecTempo(){
# $1 type
# $2 main container
# $3 composeDir
dir=$3
if [ -z "$dir" ]; then return 1; fi # le compose n'existe pas ... par exemple wordpress commun
cd "$dir"
echo -n "${NC}Arrêt ... " >& $QUIET
${SIMU}
if [ "$2" = "OUI" ]; then ${SIMU} docker-compose stop ;
else ${SIMU} docker-compose stop "${serviceNames[$1]}"
fi
${SIMU} sleep ${TEMPO_ACTION_STOP}
echo "${GREEN}OK${NC}" >& $QUIET
echo -n "${NC}Démarrage ... " >& $QUIET
if [ "$2" = "OUI" ]; then ${SIMU} docker-compose up -d ;
else ${SIMU} docker-compose up -d "${serviceNames[$1]}"
fi
${SIMU} sleep ${TEMPO_ACTION_START}
echo "${GREEN}OK${NC}" >& $QUIET
}
_reloadProxy() {
availableProxyComposes=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
for item in "${availableProxyComposes[@]}"; do
${SIMU} ${KAZ_COMP_DIR}/${item}/reload.sh
done
}
_versionContainer() {
# Affiche la version d'un container donné
# $1 type
# $2 COMMUN
# $3 orgas
if [ -n "${subScripts[$1]}" ] ; then
evalStr="${KAZ_BIN_DIR}/${subScripts[$1]} --version"
if [ ! "$2" = "OUI" ]; then evalStr="${evalStr} $3" ; fi
eval $evalStr
fi
}
_listContainer(){
# pour un type donné (cloud / agora / wiki / wp), fait une synthèse de qui est up et down / nas ou local
# $1 type
RUNNING_FROM_NAS=$(comm -12 <(printf '%s\n' ${OrgasOnNAS[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
RUNNING_LOCAL=$(comm -12 <(printf '%s\n' ${OrgasLocales[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
# tu l'as vu la belle commande pour faire une exclusion de liste
DOWN_ON_NAS=$(comm -23 <(printf '%s\n' ${OrgasOnNAS[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort) | sed -e ':a;N;$!ba;s/\n/ /g')
DOWN_LOCAL=$(comm -23 <(printf '%s\n' ${OrgasLocales[$1]} | sort) <(printf '%s\n' ${RunningOrgas[$1]} | sort)| sed -e ':a;N;$!ba;s/\n/ /g')
NB_SUR_NAS=$(echo ${OrgasOnNAS[$1]} | wc -w)
NB_LOCAUX=$(echo ${OrgasLocales[$1]} | wc -w)
NB_RUNNING_SUR_NAS=$(echo $RUNNING_FROM_NAS | wc -w)
NB_RUNNING_LOCALLY=$(echo $RUNNING_LOCAL | wc -w)
MAIN_RUNNING="${RED}DOWN${NC}"
if docker ps | grep -q " ${DockerServNames[$1]}"
then
MAIN_RUNNING="${GREEN}UP${NC}"
fi
[ -n "${composeDirs[${1}]}" ] && echo "${NC}Le ${1} commun est $MAIN_RUNNING"
if [[ ${NbOrgas[$1]} -gt 0 ]]; then
ENLOCALSTR=
if [[ ${NB_RUNNING_SUR_NAS[$1]} -gt 0 ]]; then ENLOCALSTR=" en local" ; fi
echo "Orgas : $NB_RUNNING_LOCALLY / $NB_LOCAUX running ${1}$ENLOCALSTR"
echo "${NC}UP : ${GREEN}${RUNNING_LOCAL}"
echo "${NC}DOWN : ${RED}$DOWN_LOCAL${NC}"
if [[ ${NB_RUNNING_SUR_NAS[$1]} -gt 0 ]]; then
echo "${NC}Orgas : $NB_RUNNING_SUR_NAS / $NB_SUR_NAS running depuis le NAS :"
echo "${NC}UP : ${GREEN}${RUNNING_FROM_NAS}"
echo "${NC}DOWN : ${RED}$DOWN_ON_NAS${NC}"
fi
fi
}
#########################################################
############# FONCTIONS SPECIFIQUES #####################
#########################################################
##################################
############### CLOUD ############
##################################
UpgradeClouds() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "UPGRADE des cloud" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
RunOCCCommand "upgrade"
}
OptimiseClouds() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Optimisation des cloud" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
RunCommands "OCC" "db:add-missing-indices" "db:convert-filecache-bigint --no-interaction"
}
InstallApps(){
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "INSTALL DES APPLIS sur les clouds : ${LISTE_APPS}" >& $QUIET
echo "-------------------------------------------------------------" >& $QUIET
if [ -z "${LISTE_APPS}" ]; then
echo "Aucune appli n'est précisée, j'installe les applis par défaut : ${APPLIS_PAR_DEFAUT}" >& $QUIET
LISTE_APPS="${APPLIS_PAR_DEFAUT}"
fi
PARAMS="-a \"$LISTE_APPS\""
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["cloud"]} -i $PARAMS" "Install des applis" "cloud" "#ORGA#"
}
UpdateApplis() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "UPDATE DES APPLIS des cloud : ${LISTE_APPS}" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
PARAMS="-a ${LISTE_APPS}"
if [ -z "${LISTE_APPS}" ]; then
echo "Aucune appli n'est précisée, je les met toutes à jour! " >& $QUIET
PARAMS=
fi
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["cloud"]} -u $PARAMS" "Maj des applis" "cloud" "#ORGA#"
}
##################################
############### AGORA ############
##################################
PostMessages(){
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Envoi de messages sur mattermost" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
for TEAM in "${!Posts[@]}"
do
MSG=${Posts[$TEAM]/\"/\\\"}
PARAMS="-p \"$TEAM\" \"$MSG\""
if [ ! "$QUIET" = "1" ]; then PARAMS="${PARAMS} -q" ; fi
if [ -n "$SIMU" ]; then PARAMS="${PARAMS} -n" ; fi
_executeFunctionForAll "${KAZ_BIN_DIR}/${subScripts["agora"]} $PARAMS" "Post vers $TEAM sur l'agora" "agora" "#ORGA#"
done
}
########## LANCEMENT COMMANDES OCC / MMCTL ############
RunCommands() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "Envoi de commandes en direct" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
# $1 OCC / MMCTL / EXEC
# $ suivants : les commandes
for command in "${@:2}"
do
if [ $1 = "OCC" ]; then RunOCCCommand "$command" ; fi
if [ $1 = "MMCTL" ]; then RunMMCTLCommand "$command" ; fi
if [ $1 = "EXEC" ]; then RunEXECCommand "$command" ; fi
done
}
_runSingleOccCommand(){
# $1 Command
# $2 Dockername
${SIMU} docker exec -u 33 $2 /var/www/html/occ $1
}
_runSingleMmctlCommand(){
# $1 Command
# $2 Dockername
${SIMU} docker exec $2 bin/mmctl $1
}
_runSingleExecCommand(){
# $1 Command
# $2 Dockername
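# v2 : la commande est lancée via ssh sur le site distant ; SITE_DST est supposé être positionné lors du parcours des serveurs (cf. --all-srv)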
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} docker exec $2 $1
}
RunOCCCommand() {
# $1 Command
_executeFunctionForAll "_runSingleOccCommand \"${1}\"" "OCC $1" "cloud" "#DOCKERSERVNAME#"
}
RunMMCTLCommand() {
# $1 Command
_executeFunctionForAll "_runSingleMmctlCommand \"${1}\"" "MMCTL $1" "agora" "#DOCKERSERVNAME#"
}
RunEXECCommand() {
# $1 Command
_executeFunctionForAll "_runSingleExecCommand \"${1}\"" "docker exec $1" "${CONTAINERS_TYPES[@]}" "#DOCKERSERVNAME#"
}
########## Contrôle #################
for ARG in "$@"; do
# Seul PROD1 peut attaquer tous les autres serveurs kaz sinon un serveur kaz peut juste s'attaquer lui-même (aie!)
if [ "${ARG}" == "--all-srv" -a "${site}" != "prod1" ]; then
echo "${RED}--all-srv choisi alors qu'on n'est pas sur prod1 : impossible, on quitte${NC}"
# mais pour l'instant on autorise pour les tests
# exit
fi
done
########## Main #################
for ARG in "$@"; do
#echo "${ARG}"
if [ -n "${GETOCCCOMAND}" ]; then # après un -occ
OCCCOMANDS+=("${ARG}")
GETOCCCOMAND=
elif [ -n "${GETEXECCOMAND}" ]; then # après un -exec
EXECCOMANDS+=("${ARG}")
GETEXECCOMAND=
elif [ -n "${GETAPPS}" ]; then # après un -a
LISTE_APPS="${LISTE_APPS} ${ARG}"
GETAPPS=""
elif [ -n "${GETMMCTLCOMAND}" ]; then # après un -mmctl
MMCTLCOMANDS+=("${ARG}")
GETMMCTLCOMAND=
elif [ -n "${GETTEAM}" ]; then # après un --post
GETMESSAGE="now"
GETTEAM=""
TEAM="${ARG}"
elif [ -n "${GETMESSAGE}" ]; then # après un --post "team:channel"
if [[ $TEAM == "-*" && ${#TEAM} -le 5 ]]; then echo "J'envoie mon message à \"${TEAM}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ $ARG == "-*" && ${#ARG} -le 5 ]]; then echo "J'envoie le message \"${ARG}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ ! $TEAM =~ .*:.+ ]]; then echo "Il faut mettre un destinataire sous la forme team:channel. Recommence !"; usage ; exit 1 ; fi
Posts+=( ["${TEAM}"]="$ARG" )
GETMESSAGE=""
TEAM=""
else
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'-m' | '--main' )
OPERATE_ON_MAIN="OUI-OUI" ;;
'-M' )
AVAILABLE_ORGAS= && OPERATE_ON_MAIN="OUI-OUI" ;; #pas d'orgas
'--nas' | '-nas' )
OPERATE_LOCAL_ORGA= ;; # pas les locales
'--local' | '-local' )
OPERATE_ON_NAS_ORGA= ;; # pas celles sur NAS
'-cloud'|'--cloud')
CONTAINERS_TYPES="${CONTAINERS_TYPES} cloud" ;;
'-agora'|'--agora')
CONTAINERS_TYPES="${CONTAINERS_TYPES} agora" ;;
'-wiki'|'--wiki')
CONTAINERS_TYPES="${CONTAINERS_TYPES} wiki" ;;
'-wp'|'--wp')
CONTAINERS_TYPES="${CONTAINERS_TYPES} wp" ;;
'-office'|'--office')
CONTAINERS_TYPES="${CONTAINERS_TYPES} office" ;;
'-t' )
COMMANDS="${COMMANDS} RESTART-COMPOSE" ;;
'-r' )
COMMANDS="${COMMANDS} RESTART-DOCKER" ;;
'-l' | '--list' )
COMMANDS="$(echo "${COMMANDS} LIST" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
'-U' | '--upgrade')
COMMANDS="$(echo "${COMMANDS} UPGRADE" | sed "s/\s/\n/g" | sort | uniq)" ;;
'--optim' )
COMMANDS="$(echo "${COMMANDS} OPTIMISE-CLOUD" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-u' )
COMMANDS="$(echo "${COMMANDS} UPDATE-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-i' )
COMMANDS="$(echo "${COMMANDS} INSTALL-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-a' )
GETAPPS="now" ;;
'-occ' )
COMMANDS="$(echo "${COMMANDS} RUN-CLOUD-OCC" | sed "s/\s/\n/g" | sort | uniq)"
GETOCCCOMAND="now" ;;
'-mmctl' )
COMMANDS="$(echo "${COMMANDS} RUN-AGORA-MMCTL" | sed "s/\s/\n/g" | sort | uniq)"
GETMMCTLCOMAND="now" ;;
'-exec' )
COMMANDS="$(echo "${COMMANDS} RUN-DOCKER-EXEC" | sed "s/\s/\n/g" | sort | uniq)"
GETEXECCOMAND="now" ;;
'-p' | '--post' )
COMMANDS="$(echo "${COMMANDS} POST-AGORA" | sed "s/\s/\n/g" | sort | uniq)"
GETTEAM="now" ;;
-* ) # ignore les options inconnues
;;
*)
GIVEN_ORGA="${GIVEN_ORGA} ${ARG%-orga}"
;;
esac
fi
done
if [[ "${COMMANDS[*]}" =~ "RESTART-COMPOSE" && "${COMMANDS[*]}" =~ "RESTART-TYPE" ]]; then
echo "Je restarte via docker-compose ou via docker mais pas les deux !"
usage
exit 1
fi
if [ -z "${COMMANDS}" ]; then
usage && exit
fi
if [ -n "${GIVEN_ORGA}" ]; then
# intersection des 2 listes : quelle commande de ouf !!
AVAILABLE_ORGAS=$(comm -12 <(printf '%s\n' ${AVAILABLE_ORGAS} | sort) <(printf '%s\n' ${GIVEN_ORGA} | sort))
fi
NB_ORGAS=$(echo "${AVAILABLE_ORGAS}" | wc -w )
if [[ $NB_ORGAS = 0 && -z "${OPERATE_ON_MAIN}" ]]; then
echo "Aucune orga trouvée."
exit 1
fi
NB_ORGAS_STR="$NB_ORGAS orgas"
[ -n "${OPERATE_ON_MAIN}" ] && NB_ORGAS_STR="$NB_ORGAS_STR + les communs"
_populate_lists # on récupère les clouds / agora / wiki / wp correspondants aux orga
if [[ $NB_ORGAS -gt 2 && "${COMMANDS[*]}" =~ 'INIT' ]]; then
ETLECLOUDCOMMUN=
[ -n "${OPERATE_ON_MAIN}" ] && ETLECLOUDCOMMUN=" ainsi que les containers commun"
echo "On s'apprête à initialiser les ${CONTAINERS_TYPES} suivants : ${AVAILABLE_ORGAS}${ETLECLOUDCOMMUN}"
checkContinue
fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'LIST' )
listContainers && exit ;;
'VERSION' )
version && exit ;;
'OPTIMISE-CLOUD' )
OptimiseClouds ;;
'RESTART-COMPOSE' )
restart-compose ;;
'RESTART-DOCKER' )
restart ;;
'UPDATE-CLOUD-APP' )
UpdateApplis ;;
'UPGRADE' )
UpgradeClouds ;;
'INIT' )
Init ;;
'INSTALL-CLOUD-APP' )
InstallApps ;;
'RUN-CLOUD-OCC' )
RunCommands "OCC" "${OCCCOMANDS[@]}" ;;
'RUN-AGORA-MMCTL' )
RunCommands "MMCTL" "${MMCTLCOMANDS[@]}" ;;
'RUN-DOCKER-EXEC' )
RunCommands "EXEC" "${EXECCOMANDS[@]}" ;;
'POST-AGORA' )
PostMessages ;;
esac
done

1151
bin/gestUsers.sh Executable file

File diff suppressed because it is too large

77
bin/indicateurs.sh Executable file
View File

@ -0,0 +1,77 @@
#!/bin/bash
# 23/04/2021
# script de mise à jour du fichier de collecte pour future intégration dans la base de données
# did
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
FIC_COLLECTE=${KAZ_STATE_DIR}/collecte.csv
FIC_ACTIVITE_MAILBOX=${KAZ_STATE_DIR}/activites_mailbox.csv
mkdir -p ${KAZ_STATE_DIR}
mkdir -p ${KAZ_STATE_DIR}/metro
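# format de collecte.csv : horodatage;nom-indicateur;valeur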
#Jirafeau
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"depot-count;" \
"$(find ${DOCK_VOL}/jirafeau_fileData/_data/files/ -name \*count| wc -l)" >> "${FIC_COLLECTE}"
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"depot-size;" \
"$(du -ks ${DOCK_VOL}/jirafeau_fileData/_data/files/ | awk -F " " '{print $1}')" >> "${FIC_COLLECTE}"
#PLACE DISQUE sur serveur
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"disk-system-size-used;" \
"$(df | grep sda | awk -F " " '{print $3}')" >> "${FIC_COLLECTE}"
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"disk-system-size-used-human;" \
"$(df -h | grep sda | awk -F " " '{print $3}')" >> "${FIC_COLLECTE}"
#nombre de mails kaz:
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"mailboxes;" \
"$(cat ${KAZ_COMP_DIR}/postfix/config/postfix-accounts.cf | wc -l)" >> "${FIC_COLLECTE}"
#nombre d'alias kaz:
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"mail_alias;" \
"$(cat ${KAZ_COMP_DIR}/postfix/config/postfix-virtual.cf | wc -l)" >> "${FIC_COLLECTE}"
#Nombre d' orgas
echo "$(date +%Y-%m-%d-%H-%M-%S);" \
"Orgas;" \
"$(ls -l /kaz/dockers/ | grep orga | wc -l)" >> "${FIC_COLLECTE}"
#stats des 2 postfix (mail+sympa)
EXP=$(/usr/bin/hostname -s)
STATS1=$(cat ${DOCK_VOL}/sympa_sympaLog/_data/mail.log.1 | /usr/sbin/pflogsumm)
#docker exec -i mailServ mailx -r $EXP -s "stats Sympa" root <<DEB_MESS
#$STATS1
#DEB_MESS
STATS2=$(cat ${DOCK_VOL}/postfix_mailLog/_data/mail.log | /usr/sbin/pflogsumm)
#docker exec -i mailServ mailx -r $EXP -s "stats Postfix" root <<DEB_MESS
#$STATS2
#DEB_MESS
IFS=''
for line in $(ls -lt --time-style=long-iso "${DOCK_VOL}/postfix_mailData/_data/kaz.bzh/"); do
echo "${line}" | awk '{print $6";"$7";"$8";"$9}' > "${FIC_ACTIVITE_MAILBOX}"
done
#pour pister les fuites mémoires
docker stats --no-stream --format "table {{.Name}}\t{{.Container}}\t{{.MemUsage}}" | sort -k 3 -h > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_docker_memory_kaz.log"
ps aux --sort -rss > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_ps_kaz.log"
free -hlt > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_mem_kaz.log"
systemd-cgls --no-pager > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_cgls_kaz.log"
for i in $(docker container ls --format "{{.ID}}"); do docker inspect -f '{{.State.Pid}} {{.Name}}' $i; done > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_dockerpid_kaz.log"
#on piste cette saloperie d'ethercalc
#echo $(date +"%Y%m%d") >> "${KAZ_STATE_DIR}/metro/docker_stats_ethercalc.log"
#docker stats --no-stream ethercalcServ ethercalcDB >> "${KAZ_STATE_DIR}/metro/docker_stats_ethercalc.log"
#fab le 04/10/2022
#taille des dockers
docker system df -v > "${KAZ_STATE_DIR}/metro/$(date +"%Y%m%d")_docker_size_kaz.log"

220
bin/init.sh Executable file
View File

@ -0,0 +1,220 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_ROOT}"
MY_MAIN_IP=$(ip a | grep "inet " | head -2 | tail -1 | sed "s%.*inet *\([0-9.]*\)/.*%\1%")
MY_SECOND_IP=$(ip a | grep "inet " | head -3 | tail -1 | sed "s%.*inet *\([0-9.]*\)/.*%\1%")
DOMAIN="kaz.local"
DOMAIN_SYMPA="kaz.local"
HTTP_PROTO="https"
MAIN_IP="${MY_MAIN_IP}"
SYMPA_IP="MY_SECOND_IP"
RESTART_POLICY="no"
JIRAFEAU_DIR="/var/jirafeauData/$(apg -n 1 -m 16 -M NCL)/"
DOCKERS_TMPL_ENV="${KAZ_CONF_DIR}/dockers.tmpl.env"
RESET_ENV="true"
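# si dockers.env existe déjà, on recharge ses valeurs courantes puis on demande s'il faut les modifier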
if [ -f "${DOCKERS_ENV}" ]; then
DOMAIN=$(getValInFile "${DOCKERS_ENV}" "domain")
DOMAIN_SYMPA=$(getValInFile "${DOCKERS_ENV}" "domain_sympa")
HTTP_PROTO=$(getValInFile "${DOCKERS_ENV}" "httpProto")
MAIN_IP=$(getValInFile "${DOCKERS_ENV}" "MAIN_IP")
SYMPA_IP=$(getValInFile "${DOCKERS_ENV}" "SYMPA_IP")
RESTART_POLICY=$(getValInFile "${DOCKERS_ENV}" "restartPolicy")
JIRAFEAU_DIR=$(getValInFile "${DOCKERS_ENV}" "jirafeauDir")
while : ; do
read -p "Change '${DOCKERS_ENV}'? " resetEnv
case "${resetEnv}" in
[yYoO]* )
break
;;
""|[Nn]* )
RESET_ENV=""
break
;;
* )
echo "Please answer yes no."
;;
esac
done
fi
[ -n "${RESET_ENV}" ] && {
echo "Reset '${DOCKERS_ENV}'"
read -p " * domain (kaz.bzh / dev.kaz.bzh / kaz.local)? [${YELLOW}${DOMAIN}${NC}] " domain
case "${domain}" in
"" )
DOMAIN="${DOMAIN}"
;;
* )
# XXX ne conserver que .-0-9a-z
DOMAIN=$(sed 's/[^a-z0-9.-]//g' <<< "${domain}")
;;
esac
read -p " * lists domain (kaz.bzh / kaz2.ovh / kaz.local)? [${YELLOW}${DOMAIN_SYMPA}${NC}] " domain
case "${domain}" in
"" )
DOMAIN_SYMPA="${DOMAIN_SYMPA}"
;;
* )
DOMAIN_SYMPA="${domain}"
;;
esac
while : ; do
read -p " * protocol (https / http)? [${YELLOW}${HTTP_PROTO}${NC}] " proto
case "${proto}" in
"" )
HTTP_PROTO="${HTTP_PROTO}"
break
;;
"https"|"http" )
HTTP_PROTO="${proto}"
break
;;
* ) echo "Please answer joe, emacs, vim or no."
;;
esac
done
while : ; do
read -p " * main IP (ip)? [${YELLOW}${MAIN_IP}${NC}] " ip
case "${ip}" in
"" )
MAIN_IP="${MAIN_IP}"
break
;;
* )
if testValidIp "${ip}" ; then
MAIN_IP="${ip}"
break
else
echo "Please answer x.x.x.x format."
fi
;;
esac
done
while : ; do
read -p " * lists IP (ip)? [${YELLOW}${SYMPA_IP}${NC}] " ip
case "${ip}" in
"" )
SYMPA_IP="${SYMPA_IP}"
break
;;
* )
if testValidIp "${ip}" ; then
SYMPA_IP="${ip}"
break
else
echo "Please answer x.x.x.x format."
fi
;;
esac
done
while : ; do
read -p " * restart policy (always / unless-stopped / no)? [${YELLOW}${RESTART_POLICY}${NC}] " policy
case "${policy}" in
"" )
RESTART_POLICY="${RESTART_POLICY}"
break
;;
"always"|"unless-stopped"|"no")
RESTART_POLICY="${policy}"
break
;;
* ) echo "Please answer always, unless-stopped or no."
;;
esac
done
while : ; do
read -p " * Jirafeau dir? [${YELLOW}${JIRAFEAU_DIR}${NC}] " jirafeauDir
case "${jirafeauDir}" in
"" )
JIRAFEAU_DIR="${JIRAFEAU_DIR}"
break
;;
* )
if [[ "${jirafeauDir}" =~ ^/var/jirafeauData/[0-9A-Za-z]{1,16}/$ ]]; then
JIRAFEAU_DIR="${jirafeauDir}"
break
else
echo "Please give dir name (/var/jirafeauData/[0-9A-Za-z]{1,3}/)."
fi
;;
esac
done
[ -f "${DOCKERS_ENV}" ] || cp "${DOCKERS_TMPL_ENV}" "${DOCKERS_ENV}"
sed -i "${DOCKERS_ENV}" \
-e "s%^\s*domain\s*=.*$%domain=${DOMAIN}%" \
-e "s%^\s*domain_sympa\s*=.*$%domain_sympa=${DOMAIN_SYMPA}%" \
-e "s%^\s*httpProto\s*=.*$%httpProto=${HTTP_PROTO}%" \
-e "s%^\s*MAIN_IP\s*=.*$%MAIN_IP=${MAIN_IP}%" \
-e "s%^\s*SYMPA_IP\s*=.*$%SYMPA_IP=${SYMPA_IP}%" \
-e "s%^\s*restartPolicy\s*=.*$%restartPolicy=${RESTART_POLICY}%" \
-e "s%^\s*ldapRoot\s*=.*$%ldapRoot=dc=${DOMAIN_SYMPA/\./,dc=}%" \
-e "s%^\s*jirafeauDir\s*=.*$%jirafeauDir=${JIRAFEAU_DIR}%"
}
if [ ! -f "${KAZ_CONF_DIR}/container-mail.list" ]; then
cat > "${KAZ_CONF_DIR}/container-mail.list" <<EOF
# e-mail server composer
postfix
ldap
#sympa
EOF
fi
if [ ! -f "${KAZ_CONF_DIR}/container-orga.list" ]; then
cat > "${KAZ_CONF_DIR}/container-orga.list" <<EOF
# orga composer
EOF
fi
if [ ! -f "${KAZ_CONF_DIR}/container-proxy.list" ]; then
cat > "${KAZ_CONF_DIR}/container-proxy.list" <<EOF
proxy
EOF
fi
if [ ! -f "${KAZ_CONF_DIR}/container-withMail.list" ]; then
cat > "${KAZ_CONF_DIR}/container-withMail.list" <<EOF
web
etherpad
roundcube
framadate
paheko
dokuwiki
gitea
mattermost
cloud
EOF
fi
if [ ! -f "${KAZ_CONF_DIR}/container-withoutMail.list" ]; then
cat > "${KAZ_CONF_DIR}/container-withoutMail.list" <<EOF
jirafeau
ethercalc
collabora
#vigilo
#grav
EOF
fi
if [ ! -d "${KAZ_ROOT}/secret" ]; then
rsync -a "${KAZ_ROOT}/secret.tmpl/" "${KAZ_ROOT}/secret/"
. "${KAZ_ROOT}/secret/SetAllPass.sh"
"${KAZ_BIN_DIR}/secretGen.sh"
"${KAZ_BIN_DIR}/updateDockerPassword.sh"
fi

144
bin/install.sh Executable file
View File

@ -0,0 +1,144 @@
#!/bin/bash
set -e
# on pourra inclure le fichier dockers.env pour
# gérer l' environnement DEV, PROD ou LOCAL
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
export VAGRANT_SRC_DIR=/vagrant/files
cd "${KAZ_ROOT}"
if [ ! -f "${KAZ_ROOT}/config/dockers.env" ]; then
printKazError "dockers.env not found"
exit 1
fi
for type in mail orga proxy withMail withoutMail ; do
if [ ! -f "${KAZ_ROOT}/config/container-${type}.list" ]; then
printKazError "container-${type}.list not found"
exit 1
fi
done
mkdir -p "${KAZ_ROOT}/log/"
export DebugLog="${KAZ_ROOT}/log/log-install-$(date +%y-%m-%d-%T)-"
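# tout le bloc ci-dessous s'exécute dans un sous-shell dont stdout/stderr sont dupliqués (tee) vers les fichiers de log ci-dessus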
(
declare -a DOCKERS_LIST NEW_SERVICE
# dockers à démarrer (manque : sympa, wordpress, orga)
DOCKERS_LIST+=($(getList "${KAZ_CONF_DIR}/container-withoutMail.list"))
DOCKERS_LIST+=($(getList "${KAZ_CONF_DIR}/container-proxy.list"))
DOCKERS_LIST+=($(getList "${KAZ_CONF_DIR}/container-mail.list"))
DOCKERS_LIST+=($(getList "${KAZ_CONF_DIR}/container-withMail.list"))
# web proxy postfix sympa roundcube jirafeau ldap quotas cachet ethercalc etherpad framadate paheko dokuwiki gitea mattermost cloud collabora
# 8080 443 8081 8082 8083 8084 8085 8086 8087 8088 8089 8090 8091 8092 8093 8094
# pour ne tester qu'un sous-ensemble de service
if [ $# -ne 0 ]; then
case $1 in
-h*|--h*)
echo $(basename "$0") " [-h] [-help] ([1-9]* | {service...})"
echo " -h"
echo " -help Display this help."
echo " service.. service to enable"
echo " [1-9]* level of predefined services set selection"
exit
;;
0)
echo $(basename "$0"): " level '0' not defined"
exit
;;
[0-9]*)
for level in $(seq 1 $1); do
case ${level} in
1) NEW_SERVICE+=("web" "proxy");;
2) NEW_SERVICE+=("postfix");;
3) NEW_SERVICE+=("roundcube");;
4) NEW_SERVICE+=("sympa");;
5) NEW_SERVICE+=("jirafeau");;
6) NEW_SERVICE+=("ldap");;
7) NEW_SERVICE+=("quotas");;
8) NEW_SERVICE+=("cachet");;
9) NEW_SERVICE+=("ethercalc");;
10) NEW_SERVICE+=("etherpad");;
11) NEW_SERVICE+=("framadate");;
12) NEW_SERVICE+=("paheko");;
13) NEW_SERVICE+=("dokuwiki");;
14) NEW_SERVICE+=("gitea");;
15) NEW_SERVICE+=("mattermost");;
16) NEW_SERVICE+=("collabora");;
17) NEW_SERVICE+=("cloud");;
*)
echo $(basename "$0"): " level '${level}' not defined"
exit
;;
esac
done
DOCKERS_LIST=(${NEW_SERVICE[@]})
printKazMsg "level $1"
;;
*)
# XXX name expansion is still missing ("jir" will start jirafeau but will not run its download and first steps)
DOCKERS_LIST=($*)
;;
esac
fi
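# Illustrative invocations (assuming the level definitions above):
#   ./install.sh 3          -> enables the level 1..3 services: web proxy postfix roundcube
#   ./install.sh jirafeau   -> only handles the jirafeau compose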
DOCKERS_LIST=($(filterAvailableComposes ${DOCKERS_LIST[*]}))
printKazMsg "dockers: ${DOCKERS_LIST[*]}"
# pre-download the upstream sources on the Vagrant side (jirafeau...)
mkdir -p "${KAZ_ROOT}/git" "${KAZ_ROOT}/download"
for DOCKER in ${DOCKERS_LIST[@]}; do
if [ -f "${KAZ_ROOT}/dockers/${DOCKER}/download.sh" ]; then
cd "${KAZ_ROOT}/dockers/${DOCKER}"
./download.sh
fi
done
# pre-download the depollueur (mail filtering tool)
if [[ " ${DOCKERS_LIST[*]} " =~ " "(jirafeau|postfix|sympa)" " ]]; then
"${KAZ_BIN_DIR}/installDepollueur.sh"
docker volume create filterConfig
fi
# save the pre-downloaded files for the next Vagrant run
[ -d "${VAGRANT_SRC_DIR}/kaz/download" ] &&
rsync -a "${KAZ_ROOT}/download/" "${VAGRANT_SRC_DIR}/kaz/download/"
[ -d "${VAGRANT_SRC_DIR}/kaz/git" ] &&
rsync -a "${KAZ_ROOT}/git/" "${VAGRANT_SRC_DIR}/kaz/git/"
# build the dockers that provide a build script (etherpad, framadate, jirafeau...)
for DOCKER in ${DOCKERS_LIST[@]}; do
if [ -f "${KAZ_ROOT}/dockers/${DOCKER}/build.sh" ]; then
cd "${KAZ_ROOT}/dockers/${DOCKER}"
./build.sh
fi
done
# start only the containers from the list (all at once, to keep the proxy consistent)
# "${KAZ_ROOT}/bin/container.sh" stop ${DOCKERS_LIST[*]}
"${KAZ_ROOT}/bin/container.sh" start ${DOCKERS_LIST[*]}
if [[ " ${DOCKERS_LIST[*]} " =~ " etherpad " ]]; then
# workaround for etherpad's slow startup :-(
sleep 5
"${KAZ_ROOT}/bin/container.sh" start etherpad
fi
if [[ " ${DOCKERS_LIST[*]} " =~ " jirafeau " ]]; then
# workaround for jirafeau's slow startup :-(
(cd "${KAZ_COMP_DIR}/jirafeau" ; docker-compose restart)
fi
# run the first-launch script of the dockers that provide one (etherpad, framadate, jirafeau...)
for DOCKER in ${DOCKERS_LIST[@]}; do
if [ -f "${KAZ_ROOT}/dockers/${DOCKER}/first.sh" ]; then
cd "${KAZ_ROOT}/dockers/${DOCKER}"
./first.sh
fi
done
echo "########## ********** End install $(date +%D-%T)"
) > >(tee ${DebugLog}stdout.log) 2> >(tee ${DebugLog}stderr.log >&2)

27
bin/installDepollueur.sh Executable file

@ -0,0 +1,27 @@
#!/bin/bash
SRC_DEP=https://git.kaz.bzh/KAZ/depollueur.git
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
if [[ -f "${KAZ_GIT_DIR}/depollueur/test-running" ]]; then
exit
fi
printKazMsg "\n *** Installation du dépollueur"
sudo apt-get install -y --fix-missing build-essential make g++ libboost-program-options-dev libboost-system-dev libboost-filesystem-dev libcurl4-gnutls-dev libssl-dev
mkdir -p "${KAZ_GIT_DIR}"
cd "${KAZ_GIT_DIR}"
if [ ! -d "depollueur" ]; then
git clone "${SRC_DEP}"
fi
cd depollueur
git reset --hard && git pull
make
. "${DOCKERS_ENV}"
echo "${domain}" > "src/bash/domainname"

166
bin/interoPaheko.sh Executable file

@ -0,0 +1,166 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_PAHEKO="$httpProto://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.$(echo $domain)"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
TFILE_INT_PAHEKO_ACTION=$(mktemp /tmp/XXXXXXXX_INT_PAHEKO_ACTION.json)
TFILE_INT_PAHEKO_IDFILE=$(mktemp /tmp/XXXXXXXX_TFILE_INT_PAHEKO_IDFILE.json)
FILE_CREATEUSER="$KAZ_ROOT/tmp/createUser.txt"
sep=' '
#trap "rm -f ${TFILE_INT_PAHEKO_IDFILE} ${TFILE_INT_PAHEKO_ACTION} " 0 1 2 3 15
############################################ Fonctions #######################################################
TEXTE="
# -- fichier de création des comptes KAZ
# --
# -- 1 ligne par compte
# -- champs séparés par ;. les espaces en début et en fin sont enlevés
# -- laisser vide si pas de donnée
# -- pas d'espace dans les variables
# --
# -- ORGA: nom de l'organisation (max 15 car), vide sinon
# -- ADMIN_ORGA: O/N indique si le user est admin de l'orga (va le créer comme admin du NC de l'orga et admin de l'équipe agora)
# -- NC_ORGA: O/N indique si l'orga a demandé un NC
# -- PAHEKO_ORGA: O/N indique si l'orga a demandé un paheko
# -- WP_ORGA: O/N indique si l'orga a demandé un wp
# -- AGORA_ORGA: O/N indique si l'orga a demandé un mattermost
# -- WIKI_ORGA: O/N indique si l'orga a demandé un wiki
# -- NC_BASE: O/N indique si le user doit être inscrit dans le NC de base
# -- GROUPE_NC_BASE: soit null soit le groupe dans le NC de base
# -- EQUIPE_AGORA: soit null soit equipe agora (max 15 car)
# -- QUOTA=(1/10/20/...) en GB
# --
# NOM ; PRENOM ; EMAIL_SOUHAITE ; EMAIL_SECOURS ; ORGA ; ADMIN_ORGA ; NC_ORGA ; PAHEKO_ORGA ; WP_ORGA ; AGORA_ORGA ; WIKI_ORGA ; NC_BASE ; GROUPE_NC_BASE ; EQUIPE_AGORA ; QUOTA
#
# exemple pour un compte découverte:
# dupont ; jean-louis; jean-louis.dupont@kaz.bzh ; gregomondo@kaz.bzh; ; N; N; N; N; N; N; O; ; ;1
#
# exemple pour un compte asso de l'orga gogol avec le service dédié NC uniquement + une équipe dans l'agora
# dupont ; jean-louis; jean-louis.dupont@kaz.bzh ; gregomondo@kaz.bzh; gogol ; O; O; N; N; N; N;N;;gogol_team; 10
"
Int_paheko_Action() {
# $1: action label; $2: optional "silence" flag
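# Queries the Paheko API for every member whose action_auto field matches $1
# and regenerates ${FILE_CREATEUSER} (one line per account, ';'-separated fields).
# With $2 = "silence", progress messages are suppressed.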
ACTION=$1
OPTION=$2
# send the request to the paheko server asking for the accounts to create
# known issue: rows from the services_users table are not reported correctly when an account has more than 2 activities
#curl -s ${URL_PAHEKO}/api/sql -d "SELECT * from users cross join services_users on users.id = services_users.id_user where users.action_auto='${ACTION}';" >>${TFILE_INT_PAHEKO_ACTION}
curl -s ${URL_PAHEKO}/api/sql -d "SELECT * from users where action_auto='${ACTION}';" >>${TFILE_INT_PAHEKO_ACTION}
[ -s "${TFILE_INT_PAHEKO_ACTION}" ] || { echo "probleme de fichier ${TFILE_INT_PAHEKO_ACTION}" ; exit 1;}
REP_ID=$(jq -c '.results[].id ' ${TFILE_INT_PAHEKO_ACTION} 2>/dev/null)
if [ ! -z "${REP_ID}" ]
then
[ "$OPTION" = "silence" ] || echo -e "${RED}Nombre de compte ${ACTION} ${NC}= ${GREEN} $(echo ${REP_ID} | wc -w) ${NC}"
if [ -f "$FILE_CREATEUSER" ]
then
mv $FILE_CREATEUSER $FILE_CREATEUSER.$(date +%d-%m-%Y-%H:%M:%S)
fi
echo "# -------- Fichier généré le $(date +%d-%m-%Y-%H:%M:%S) ----------">${FILE_CREATEUSER}
echo "${TEXTE}" >>${FILE_CREATEUSER}
for VAL_ID in ${REP_ID}
do
jq -c --argjson val "${VAL_ID}" '.results[] | select (.id == $val)' ${TFILE_INT_PAHEKO_ACTION} > ${TFILE_INT_PAHEKO_IDFILE}
for VAL_GAR in id_category action_auto nom email email_secours quota_disque admin_orga nom_orga responsable_organisation responsable_email agora cloud wordpress garradin docuwiki id_service
do
eval $VAL_GAR=$(jq .$VAL_GAR ${TFILE_INT_PAHEKO_IDFILE})
done
#everything is fine so far, keep going
#count the number of fields in the "nom" zone to handle compound first/last names
# with 2 fields: the first one is the last name and the second one the first name
# with more fields: all-uppercase words are collected into the last name, the others into the first name
# reset the nom_ok and prenom_ok fields
nom_ok=""
prenom_ok=""
# check whether the orga name is set, or whether it is null / the membership activity is 7 (attached member)
# when the orga name is set (and it is not an attached member), the last name becomes the orga name and the first name is forced to "Organisation"
if [[ "$nom_orga" = null ]] || [[ "$nom_orga" != null && "$id_service" = "7" ]]
then
[ "$OPTION" = "silence" ] || echo -e "${NC}Abonné ${GREEN}${nom}${NC}"
#if the activity is "attached member", show which orga the account is attached to
if [ "$id_service" = "7" ] && [ "$OPTION" != "silence" ] && [ "$nom_orga" != null ]
then
echo -e "${NC}Orga Rattachée : ${GREEN}${nom_orga}${NC}"
fi
COMPTE_NOM=$(echo $nom | awk -F' ' '{for (i=1; i != NF; i++); print i;}')
case "${COMPTE_NOM}" in
0|1)
echo "Il faut corriger le champ nom (il manque un nom ou prénom) de paheko"
echo "je quitte et supprime le fichier ${FILE_CREATEUSER}"
rm -f $FILE_CREATEUSER
exit 2
;;
2)
nom_ok=$(echo $nom | awk -F' ' '{print $1}')
prenom_ok=$(echo $nom | awk -F' ' '{print $2}')
;;
*)
nom_ok=
prenom_ok=
for i in ${nom}; do grep -q '^[A-Z]*$' <<<"${i}" && nom_ok="${nom_ok}${sep}${i}" || prenom_ok="${prenom_ok}${sep}${i}"; done
nom_ok="${nom_ok#${sep}}"
prenom_ok="${prenom_ok#${sep}}"
if [ -z "${nom_ok}" ] || [ -z "${prenom_ok}" ]; then
echo "Il faut corriger le champ nom (peut être un nom de famille avec une particule ?) de paheko"
echo "je quitte et supprime le fichier ${FILE_CREATEUSER}"
rm -f $FILE_CREATEUSER
exit
fi
esac
# since the orga is null: empty orga name, no orga admin, the account goes to the general agora,
# no agora team and no specific nextcloud group
nom_orga=" "
admin_orga="N"
nc_base="O"
equipe_agora=" "
groupe_nc_base=" "
else
# the orga is set in paheko, so the last and first names are forced to nom_orga and "organisation",
# an agora team will be named after the orga, the account is not created in the general nextcloud,
# and the account is admin of its orga
nom_orga=$(echo $nom_orga | tr [:upper:] [:lower:])
[ "$OPTION" = "silence" ] || echo -e "${NC}Orga : ${GREEN}${nom_orga}${NC}"
nom_ok=$nom_orga
# check the characters allowed in the orga name: letters, digits and/or dashes
if ! [[ "${nom_ok}" =~ ^[[:alnum:]-]+$ ]]; then
echo "Erreur : l' orga doit être avec des lettres et/ou des chiffres. Le séparateur doit être le tiret"
rm -f $FILE_CREATEUSER
exit 2
fi
prenom_ok=organisation
equipe_agora=$nom_orga
groupe_nc_base=" "
nc_base="N"
admin_orga="O"
fi
# for the remaining fields, map 0 to N (no) and 1 to O (yes)
cloud=$(echo $cloud | sed -e 's/0/N/g' | sed -e 's/1/O/g')
paheko=$(echo $garradin | sed -e 's/0/N/g' | sed -e 's/1/O/g')
wordpress=$(echo $wordpress | sed -e 's/0/N/g' | sed -e 's/1/O/g')
agora=$(echo $agora | sed -e 's/0/N/g' | sed -e 's/1/O/g')
docuwiki=$(echo $docuwiki | sed -e 's/0/N/g' | sed -e 's/1/O/g')
# finally, write the line to the output file
echo "$nom_ok;$prenom_ok;$email;$email_secours;$nom_orga;$admin_orga;$cloud;$paheko;$wordpress;$agora;$docuwiki;$nc_base;$groupe_nc_base;$equipe_agora;$quota_disque">>${FILE_CREATEUSER}
done
else
echo "Rien à créer"
exit 2
fi
}
#Int_paheko_Action "A créer" "silence"
Int_paheko_Action "A créer"
exit 0

14
bin/iptables.sh Executable file

@ -0,0 +1,14 @@
#!/bin/bash
#cleaning, may throw errors at first launch
#iptables -t nat -D POSTROUTING -o ens18 -j ipbis
#iptables -t nat -F ipbis
#iptables -t nat -X ipbis
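# The dedicated "ipbis" nat chain SNATs outgoing TCP traffic of the selected containers
# through the address carried by ens18:0 (a secondary IP on ens18) instead of the host's
# primary address, then hooks the chain into POSTROUTING.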
iptables -t nat -N ipbis
iptables -t nat -F ipbis
iptables -t nat -I ipbis -o ens18 -p tcp --source `docker inspect -f '{{.NetworkSettings.Networks.sympaNet.IPAddress}}' sympaServ` -j SNAT --to `ifconfig ens18:0 | grep "inet" | awk '{print $2}'`
iptables -t nat -I ipbis -o ens18 -p tcp --source `docker inspect -f '{{.NetworkSettings.Networks.jirafeauNet.IPAddress}}' sympaServ` -j SNAT --to `ifconfig ens18:0 | grep "inet" | awk '{print $2}'`
iptables -t nat -A ipbis -j RETURN
iptables -t nat -D POSTROUTING -o ens18 -j ipbis
iptables -t nat -I POSTROUTING -o ens18 -j ipbis

110
bin/kazDockerNet.sh Executable file

@ -0,0 +1,110 @@
#!/bin/bash
# TODO: add shell completion with the available components
PRG=$(basename $0)
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
usage () {
echo "Usage: ${PRG} [-n] [-h] list|add [netName]..."
echo " -n : simulation"
echo " -h|--help : help"
echo
echo " create all net : ${PRG} add $(${KAZ_BIN_DIR}/kazList.sh compose validate)"
exit 1
}
allNetName=""
export CMD=""
for ARG in $@; do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-n' )
shift
export SIMU="echo"
;;
-*)
usage
;;
list|add)
CMD="${ARG}"
shift;
;;
*)
allNetName="${allNetName} ${ARG}"
shift
;;
esac
done
if [ -z "${CMD}" ] ; then
usage
fi
# running composes
export allBridgeName="$(docker network list | grep bridge | awk '{print $2}')"
# running network
export allBridgeNet=$(for net in ${allBridgeName} ; do docker inspect ${net} | grep Subnet | sed 's#.*"\([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/[0-9]*\)".*# \1#'; done)
minB=0
minC=0
minD=0
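# getNet allocates the next free /28 (16 addresses) in 10.0.0.0/8 for "<name>Net":
# it scans 10.<b>.<c>.<d*16> and skips subnets already used by an existing bridge.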
getNet() {
netName="$1Net"
if [[ "${allBridgeName}" =~ "${netName}" ]]; then
echo "${netName} already created"
return
fi
# echo "start 10.${minB}.${minC}.$((${minD}*16))"
find=""
for b in $(eval echo {${minB}..255}); do
for c in $(eval echo {${minC}..255}); do
for d in $(eval echo {${minD}..15}); do
if [ ! -z "${find}" ]; then
minB=${b}
minC=${c}
minD=${d}
return
fi
# to try
subnet="10.${b}.${c}.$((d*16))"
if [[ "${allBridgeNet}" =~ " ${subnet}/" ]];
then
# used
# XXX check netmask
continue
fi
# the winner is...
echo "${netName} => ${subnet}/28"
${SIMU} docker network create --subnet "${subnet}/28" "${netName}"
find="ok"
done
minD=0
done
minC=0
done
}
list () {
echo "name: " ${allBridgeName}
echo "net: " ${allBridgeNet}
}
add () {
if [ -z "${allNetName}" ] ; then
usage
fi
for netName in ${allNetName}; do
getNet "${netName}"
done
}
"${CMD}"

181
bin/kazList.sh Executable file

@ -0,0 +1,181 @@
#!/bin/bash
PRG=$(basename $0)
SIMU=""
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
cd "$(dirname $0)"
ALL_STATUS=$(docker ps -a --format "{{.ID}} {{.Names}} {{.Status}}")
SERVICES_CHOICE="$(getAvailableServices | tr "\n" "|")"
SERVICES_CHOICE="${SERVICES_CHOICE%|}"
usage () {
echo "${RED}${BOLD}" \
"Usage: $0 [-h] {compose|orga|service} {available|validate|enable|disable|status} [names]...${NL}" \
" -h help${NL}" \
" compose {available|validate|enable|disable} : list docker-compose name${NL}" \
" compose status : status of docker-compose (default available)${NL}" \
" service {available|validate} : list services name${NL}" \
" service {enable|disable} : list services in orga${NL}" \
" service status : status of services in orga${NL}" \
" service {${SERVICES_CHOICE}}${NL}" \
" : list of orga enable a service ${NL}" \
" [compose] in${NL}" \
" ${CYAN}$((getAvailableComposes;getAvailableOrgas) | tr "\n" " ")${NC}${NL}"
exit 1
}
# ========================================
compose_available () {
echo $*
}
getComposeEnableByProxy () {
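# a compose is considered enabled when the variable proxy_<compose> (dashes replaced
# by underscores) is set to "on"; these flags are presumably defined in dockers.env.
# ${!composeFlag} is bash indirect expansion.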
onList=$(
for type in ${KAZ_CONF_DIR}/container-*.list ; do
getList "${type}"
done)
local compose
for compose in ${onList} ; do
composeFlag="proxy_${compose//-/_}"
[[ "${!composeFlag}" == "on" ]] && echo ${compose}
done
}
compose_validate () {
echo $(
for type in ${KAZ_CONF_DIR}/container-*.list ; do
getList "${type}"
done | filterInList $*)
}
compose_enable () {
echo $(getComposeEnableByProxy | filterInList $*)
}
compose_disable () {
echo $(getAvailableComposes | filterNotInList $(getComposeEnableByProxy) | filterInList $*)
}
compose_status () {
for compose in $*; do
cd "${KAZ_COMP_DIR}/${compose}"
echo "${compose}:"
for service in $(docker-compose ps --services 2>/dev/null); do
id=$(docker-compose ps -q "${service}" | cut -c 1-12)
if [ -z "${id}" ]; then
echo " - ${RED}${BOLD}[Down]${NC} ${service}"
else
status=$(grep "^${id}\b" <<< "${ALL_STATUS}" | sed "s/.*${id}\s\s*\S*\s\s*\(\S*.*\)/\1/")
COLOR=$([[ "${status}" =~ Up ]] && echo "${GREEN}" || echo "${RED}")
echo " - ${COLOR}${BOLD}[${status}]${NC} ${service}"
fi
done
done
}
# ========================================
service_available () {
echo $(getAvailableServices)
}
service_validate () {
echo $(getAvailableServices)
}
getServiceInOrga () {
for orga in $*; do
[[ "${orga}" = *"-orga" ]] || continue
local ORGA_DIR="${KAZ_COMP_DIR}/${orga}"
ORGA_COMPOSE="${ORGA_DIR}/docker-compose.yml"
[[ -f "${ORGA_COMPOSE}" ]] || continue
for service in $(getAvailableServices); do
case "${service}" in
paheko)
[ -f "${ORGA_DIR}/usePaheko" ] && echo "${service}"
;;
wiki)
grep -q "\s*dokuwiki:" "${ORGA_COMPOSE}" 2>/dev/null && echo "${service}"
;;
wp)
grep -q "\s*wordpress:" "${ORGA_COMPOSE}" 2>/dev/null && echo "${service}"
;;
*)
grep -q "\s*${service}:" "${ORGA_COMPOSE}" 2>/dev/null && echo "${service}"
esac
done
done
}
getOrgaWithService() {
service="$1"
shift
case "${service}" in
wiki) keyword="dokuwiki" ;;
wp) keyword="wordpress" ;;
*) keyword="${service}" ;;
esac
for orga in $*; do
[[ "${orga}" = *"-orga" ]] || continue
local ORGA_DIR="${KAZ_COMP_DIR}/${orga}"
ORGA_COMPOSE="${ORGA_DIR}/docker-compose.yml"
[[ -f "${ORGA_COMPOSE}" ]] || continue
if [ "${service}" = "paheko" ]; then
[ -f "${ORGA_DIR}/usePaheko" ] && echo "${orga}"
else
grep -q "\s*${keyword}:" "${ORGA_COMPOSE}" 2>/dev/null && echo "${orga}"
fi
done
}
service_enable () {
echo $(getServiceInOrga $* | sort -u)
}
service_disable () {
echo $(getAvailableServices | filterNotInList $(getServiceInOrga $*))
}
service_status () {
# ps per enable
echo "*** TODO ***"
}
# ========================================
KAZ_CMD=""
case "$1" in
'-h' | '-help' )
usage
;;
compose|service)
KAZ_CMD="$1"
shift
;;
*)
usage
;;
esac
KAZ_OPT=""
case "$1" in
available|validate|enable|disable|status)
KAZ_OPT="$1"
shift
;;
*)
if [ "${KAZ_CMD}" = "service" ] && [[ $1 =~ ^(${SERVICES_CHOICE})$ ]]; then
KAZ_OPT="$1"
shift
getOrgaWithService "${KAZ_OPT}" $(filterAvailableComposes $*)
exit
fi
usage
;;
esac
${KAZ_CMD}_${KAZ_OPT} $(filterAvailableComposes $*)

12
bin/ldap/ldap_sauve.sh Executable file

@ -0,0 +1,12 @@
#!/bin/bash
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
FILE_LDIF=/home/sauve/ldap.ldif
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
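# Dump the whole LDAP tree (base ${ldap_root}) with slapcat inside the ldap container
# and store it gzip-compressed as ${FILE_LDIF}.gz.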
docker exec -u 0 -i ${ldapServName} slapcat -F /opt/bitnami/openldap/etc/slapd.d -b ${ldap_root} | gzip >${FILE_LDIF}.gz

23
bin/ldap/ldapvi.sh Executable file

@ -0,0 +1,23 @@
#!/bin/bash
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
read -p "quel éditeur ? [vi] " EDITOR
EDITOR=${EDITOR:-vi}
# if [ ${EDITOR} = 'emacs' ]; then
# echo "ALERTE ALERTE !!! quelqu'un a voulu utiliser emacs :) :) :)"
# exit
# fi
export EDITOR=${EDITOR}
ldapvi -h $LDAP_IP -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -w ${ldap_LDAP_ADMIN_PASSWORD} --discover

222
bin/ldap/migrate_to_ldap.sh Executable file

@ -0,0 +1,222 @@
#!/bin/bash
echo "ATTENTION ! Il ne faut plus utiliser ce script, il est probable qu'il commence à mettre la grouille avec le LDAP qui vit sa vie..."
exit 1
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
ACCOUNTS=/kaz/dockers/postfix/config/postfix-accounts.cf
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
URL_GARRADIN="$httpProto://${paheko_API_USER}:${paheko_API_PASSWORD}@kaz-paheko.$(echo $domain)"
# docker exec -i nextcloudDB mysql --user=${nextcloud_MYSQL_USER} --password=${nextcloud_MYSQL_PASSWORD} ${nextcloud_MYSQL_DATABASE} <<< "select * from oc_accounts;" > /tmp/oc_accounts
ERRORS="/tmp/ldap-errors.log"
> ${ERRORS}
mkdir -p /tmp/ldap/
# curl -s ${URL_GARRADIN}/api/sql -d "SELECT * from membres where emails_rattaches LIKE '%mailrattache%';"
for line in `cat ${ACCOUNTS}`
do
mail=$(echo $line | awk -F '|' '{print $1}')
user=$(echo $mail | awk -F '@' '{print $1}')
domain=$(echo $mail | awk -F '@' '{print $2}')
pass=$(echo $line | awk -F '|' '{print $2}' | sed -e "s/SHA512-//")
IDENT_KAZ=
if [ ${mode} = "prod" ]; then
ficheGarradin=$(curl -s ${URL_GARRADIN}/api/sql -d "SELECT * from membres where email='${mail}';")
mailDeSecours=$(echo ${ficheGarradin} | jq .results[0].email_secours | sed -e "s/\"//g")
quota=$(echo ${ficheGarradin} | jq .results[0].quota_disque | sed -e "s/\"//g")
nom=$(echo ${ficheGarradin} | jq .results[0].nom | sed -e "s/\"//g")
nom_orga=$(echo ${ficheGarradin} | jq .results[0].nom_orga | sed -e "s/\"//g")
else
mailDeSecours=${mail}
quota=1
nom=${mail}
nom_orga="null"
fi
if [ "${quota}" = "null" ]; then
quota=1
fi
# nextcloudEnabled=MAYBE
# IDENT_KAZ=$(grep -i \"${mail}\" /tmp/oc_accounts | cut -f1)
#
# if [ ! -z "${IDENT_KAZ}" ]; then # ident Kaz trouvé avec le mail Kaz
# nextcloudEnabled=TRUE
# else # pas trouvé avec le mail Kaz
# if [ "${nom_orga}" != "null" ]; then # c'est une orga, pas de NC
# IDENT_KAZ="null"
# nextcloudEnabled=FALSE
# else # pas trouvé avec le mail Kaz, pas une orga, on retente avec le mail de secours
# IDENT_KAZ=$(grep -i \"${mailDeSecours}\" /tmp/oc_accounts | cut -f1 | head -n1)
# if [ ! -z "${IDENT_KAZ}" ]; then # on a trouvé l'ident kaz chez NC avec le mail de secours
# nextcloudEnabled=TRUE
# else # pas trouvé avec le mail Kaz, pas une orga, pas trouvé avec le mail de secours
# ficheRattache=$(curl -s ${URL_GARRADIN}/api/sql -d "SELECT * from membres where emails_rattaches LIKE '%${mail}%';" | jq ".results | length")
# if [ $ficheRattache != "0" ]; then # c'est un mail rattaché, pas de NC c'est normal
# IDENT_KAZ="null"
# nextcloudEnabled=FALSE
# else # pas trouvé, pas une orga, pas mail rattaché donc souci
# echo "Pas trouvé l'identifiant Kaz nextcloud pour ${mail} / ${mailDeSecours}, on désactive nextcloud pour ce compte" >> ${ERRORS}
# IDENT_KAZ="null"
# nextcloudEnabled=FALSE
# fi
# fi
# fi
# fi
echo -e "\n\ndn: cn=${mail},ou=users,${ldap_root}\n\
changeType: add\n\
objectClass: inetOrgPerson\n\
sn: ${nom}\n\
userPassword: ${pass}\n\
\n\n\
dn: cn=${mail},ou=users,${ldap_root}\n\
changeType: modify\n\
replace: objectClass\n\
objectClass: inetOrgPerson\n\
objectClass: kaznaute\n\
objectClass: PostfixBookMailAccount\n\
objectClass: nextcloudAccount\n\
-\n\
replace: sn\n\
sn: ${nom}\n\
-\n\
replace: mail\n\
mail: ${mail}\n\
-\n\
replace: mailEnabled\n\
mailEnabled: TRUE\n\
-\n\
replace: mailGidNumber\n\
mailGidNumber: 5000\n\
-\n\
replace: mailHomeDirectory\n\
mailHomeDirectory: /var/mail/${domain}/${user}/\n\
-\n\
replace: mailQuota\n\
mailQuota: ${quota}G\n\
-\n\
replace: mailStorageDirectory\n\
mailStorageDirectory: maildir:/var/mail/${domain}/${user}/\n\
-\n\
replace: mailUidNumber\n\
mailUidNumber: 5000\n\
-\n\
replace: nextcloudQuota\n\
nextcloudQuota: ${quota} GB\n\
-\n\
replace: mailDeSecours\n\
mailDeSecours: ${mailDeSecours}\n\
-\n\
replace: quota\n\
quota: ${quota}\n\
-\n\
replace: agoraEnabled\n\
agoraEnabled: TRUE\n\
-\n\
replace: mobilizonEnabled\n\
mobilizonEnabled: TRUE\n\n" | tee /tmp/ldap/${mail}.ldif | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
done
#replace: nextcloudEnabled\n\
#nextcloudEnabled: ${nextcloudEnabled}\n\
#-\n\
#replace: identifiantKaz\n\
#identifiantKaz: ${IDENT_KAZ}\n\
#-\n\
OLDIFS=${IFS}
IFS=$'\n'
# ALIASES is the input file
ALIASES="/kaz/dockers/postfix/config/postfix-virtual.cf"
# ALIASES_WITHLDAP is the output file for the forwards that are not stored in the ldap
ALIASES_WITHLDAP="/kaz/dockers/postfix/config/postfix-virtual-withldap.cf"
# empty the output file before starting
> ${ALIASES_WITHLDAP}
for line in `cat ${ALIASES}`
do
echo "Virtual line is $line"
if [ `grep -v "," <<< $line` ]
then
echo "Alias simple"
mail=$(echo $line | awk -F '[[:space:]]*' '{print $2}')
if [ `grep $mail ${ACCOUNTS}` ]
then
echo "Alias vers un mail local, go ldap"
LIST=""
for alias in `grep ${mail} ${ALIASES} | grep -v "," | cut -d' ' -f1`
do
LIST=${LIST}"mailAlias: $alias\n"
done
echo -e "dn: cn=${mail},ou=users,${ldap_root}\n\
changeType: modify
replace: mailAlias\n\
$LIST\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
else
echo "Alias vers un mail externe, go fichier"
echo $line >> ${ALIASES_WITHLDAP}
echo " + intégration LDAP"
src=$(echo $line | awk -F '[[:space:]]*' '{print $1}')
dst=$(echo $line | awk -F '[[:space:]]*' '{print $2}')
echo -e "\n\ndn: cn=${src},ou=mailForwardings,${ldap_root}\n\
changeType: add\n\
objectClass: organizationalRole\n\
\n\n\
dn: cn=${src},ou=mailForwardings,${ldap_root}\n\
changeType: modify\n\
replace: objectClass\n\
objectClass: organizationalRole\n\
objectClass: PostfixBookMailForward\n\
-\n\
replace: mailAlias\n\
mailAlias: ${src}\n\
-\n\
replace: mail\n\
mail: ${dst}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
fi
else
echo "Forward vers plusieurs adresses, on met dans le fichier"
echo $line >> ${ALIASES_WITHLDAP}
echo " + intégration LDAP"
src=$(echo $line | awk -F '[[:space:]]*' '{print $1}')
dst=$(echo $line | awk -F '[[:space:]]*' '{print $2}')
OOLDIFS=${IFS}
IFS=","
LIST=""
for alias in ${dst}
do
LIST=${LIST}"mail: $alias\n"
done
IFS=${OOLDIFS}
echo -e "\n\ndn: cn=${src},ou=mailForwardings,${ldap_root}\n\
changeType: add\n\
objectClass: organizationalRole\n\
\n\n\
dn: cn=${src},ou=mailForwardings,${ldap_root}\n\
changeType: modify\n\
replace: objectClass\n\
objectClass: organizationalRole\n\
objectClass: PostfixBookMailForward\n\
-\n\
replace: mailAlias\n\
mailAlias: ${src}\n\
-\n\
replace: mail\n\
${LIST}\n\n" | ldapmodify -c -H ldap://${LDAP_IP} -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -x -w ${ldap_LDAP_ADMIN_PASSWORD}
fi
done
IFS=${OLDIFS}

20
bin/ldap/tests/nc_orphans.sh Executable file
View File

@ -0,0 +1,20 @@
#!/bin/bash
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
LDAP_IP=$(docker inspect -f '{{.NetworkSettings.Networks.ldapNet.IPAddress}}' ldapServ)
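# List every Nextcloud uid and report those that do not resolve to exactly one LDAP
# entry (lookup on identifiantKaz), i.e. Nextcloud accounts orphaned from the LDAP.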
docker exec -i nextcloudDB mysql --user=${nextcloud_MYSQL_USER} --password=${nextcloud_MYSQL_PASSWORD} ${nextcloud_MYSQL_DATABASE} <<< "select uid from oc_users;" > /tmp/nc_users.txt
OLDIFS=${IFS}
IFS=$'\n'
for line in `cat /tmp/nc_users.txt`; do
result=$(ldapsearch -h $LDAP_IP -D "cn=${ldap_LDAP_ADMIN_USERNAME},${ldap_root}" -w ${ldap_LDAP_ADMIN_PASSWORD} -b $ldap_root -x "(identifiantKaz=${line})" | grep numEntries)
echo "${line} ${result}" | grep -v "numEntries: 1" | grep -v "^uid"
done
IFS=${OLDIFS}

(binary image assets added under bin/look/ -- diffs not shown: bin/look/feminin/kazmel.png, bin/look/greve/kaz-tete.png, bin/look/greve/kazdate.png, bin/look/greve/kazmel.png, bin/look/kaz/kaz-entier.png, bin/look/kaz/kaz-tete.png, bin/look/kaz/kaz-tete.svg, bin/look/kaz/kazdate.png, bin/look/kaz/kazmel.png, bin/look/noel/kaz-tete.png, bin/look/noel/kazdate.png, bin/look/noel/kazmel.png, plus several unnamed PNG/SVG files)
180
bin/manageAgora.sh Executable file
View File

@ -0,0 +1,180 @@
#!/bin/bash
# Mattermost management script
# init / versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
QUIET="1"
ONNAS=
AGORACOMMUN="OUI_PAR_DEFAUT"
DockerServName=${mattermostServName}
declare -A Posts
usage() {
echo "${PRG} [OPTION] [COMMANDES] [ORGA]
Manipulation d'un mattermost
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
--nas L'orga se trouve sur le NAS !
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du mattermost
-v|--version Donne la version du mattermost et signale les MàJ
-mmctl \"command\" Envoie une commande via mmctl ** SPECIFIQUES **
-p|--post \"team\" \"message\" Poste un message dans une team agora ** AGORA **
ORGA parmi : ${AVAILABLE_ORGAS}
ou vide si mattermost commun
"
}
Init(){
NOM=$ORGA
if [ -n "$AGORACOMMUN" ] ; then NOM="KAZ" ; fi
CONF_FILE="${DOCK_VOL}/orga_${ORGA}-matterConfig/_data/config.json"
if [ -n "${AGORACOMMUN}" ]; then
CONF_FILE="${DOCK_VOL}/mattermost_matterConfig/_data/config.json"
elif [ -n "${ONNAS}" ]; then
CONF_FILE="${NAS_VOL}/orga_${ORGA}-matterConfig/_data/config.json"
fi
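# Rewrite the Mattermost config.json in place (site/websocket URLs, listen port, SMTP
# relay, locale and e-mail notification settings), then restart the container so the
# new port is taken into account.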
${SIMU} sed -i \
-e 's|"SiteURL": ".*"|"SiteURL": "'${MATTER_URL}'"|g' \
-e 's|"ListenAddress": ".*"|"ListenAddress": ":'${matterPort}'"|g' \
-e 's|"WebsocketURL": ".*"|"WebsocketURL": "wss://'${MATTER_URI}'"|g' \
-e 's|"AllowCorsFrom": ".*"|"AllowCorsFrom": "'${domain}' '${MATTER_URI}':443 '${MATTER_URI}'"|g' \
-e 's|"ConsoleLevel": ".*"|"ConsoleLevel": "ERROR"|g' \
-e 's|"SendEmailNotifications": false|"SendEmailNotifications": true|g' \
-e 's|"FeedbackEmail": ".*"|"FeedbackEmail": "admin@'${domain}'"|g' \
-e 's|"FeedbackOrganization": ".*"|"FeedbackOrganization": "Cochez la KAZ du libre !"|g' \
-e 's|"ReplyToAddress": ".*"|"ReplyToAddress": "admin@'${domain}'"|g' \
-e 's|"SMTPServer": ".*"|"SMTPServer": "mail.'${domain}'"|g' \
-e 's|"SMTPPort": ".*"|"SMTPPort": "25"|g' \
-e 's|"DefaultServerLocale": ".*"|"DefaultServerLocale": "fr"|g' \
-e 's|"DefaultClientLocale": ".*"|"DefaultClientLocale": "fr"|g' \
-e 's|"AvailableLocales": ".*"|"AvailableLocales": "fr"|g' \
${CONF_FILE}
# on redémarre pour prendre en compte (changement de port)
${SIMU} docker restart "${DockerServName}"
[ $? -ne 0 ] && printKazError "$DockerServName est down : impossible de terminer l'install" && return 1 >& $QUIET
${SIMU} waitUrl "$MATTER_URL" 300
[ $? -ne 0 ] && printKazError "$DockerServName ne parvient pas à démarrer correctement : impossible de terminer l'install" && return 1 >& $QUIET
# creation compte admin
${SIMU} curl -i -d "{\"email\":\"${mattermost_MM_ADMIN_EMAIL}\",\"username\":\"${mattermost_user}\",\"password\":\"${mattermost_pass}\",\"allow_marketing\":true}" "${MATTER_URL}/api/v4/users"
MM_TOKEN=$(_getMMToken ${MATTER_URL})
#on crée la team
${SIMU} curl -i -H "Authorization: Bearer ${MM_TOKEN}" -d "{\"display_name\":\"${NOM}\",\"name\":\"${NOM,,}\",\"type\":\"O\"}" "${MATTER_URL}/api/v4/teams"
}
Version(){
VERSION=$(docker exec "$DockerServName" bin/mmctl version | grep -i version:)
echo "Version $DockerServName : ${GREEN}${VERSION}${NC}"
}
_getMMToken(){
#$1 MATTER_URL
${SIMU} curl -i -s -d "{\"login_id\":\"${mattermost_user}\",\"password\":\"${mattermost_pass}\"}" "${1}/api/v4/users/login" | grep 'token' | sed 's/token:\s*\(.*\)\s*/\1/' | tr -d '\r'
}
PostMessage(){
printKazMsg "Envoi à $TEAM : $MESSAGE" >& $QUIET
${SIMU} docker exec -ti "${DockerServName}" bin/mmctl auth login "${MATTER_URL}" --name local-server --username ${mattermost_user} --password ${mattermost_pass}
${SIMU} docker exec -ti "${DockerServName}" bin/mmctl post create "${TEAM}" --message "${MESSAGE}"
}
MmctlCommand(){
# $1 command
${SIMU} docker exec -u 33 "$DockerServName" bin/mmctl $1
}
########## Main #################
for ARG in "$@"; do
if [ -n "${GETMMCTLCOMAND}" ]; then # après un -mmctl
MMCTLCOMAND="${ARG}"
GETMMCTLCOMAND=
elif [ -n "${GETTEAM}" ]; then # après un --post
GETMESSAGE="now"
GETTEAM=""
TEAM="${ARG}"
elif [ -n "${GETMESSAGE}" ]; then # après un --post "team:channel"
if [[ $TEAM == -* && ${#TEAM} -le 5 ]]; then echo "J'envoie mon message à \"${TEAM}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ $ARG == -* && ${#ARG} -le 5 ]]; then echo "J'envoie le message \"${ARG}\" ?? Arf, ça me plait pas j'ai l'impression que tu t'es planté sur la commande."; usage ; exit 1 ; fi
if [[ ! $TEAM =~ .*:.+ ]]; then echo "Il faut mettre un destinataire sous la forme team:channel. Recommence !"; usage ; exit 1 ; fi
MESSAGE="$ARG"
GETMESSAGE=""
else
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'--nas' | '-nas' )
ONNAS="SURNAS" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
'--mmctl' | '-mmctl' )
COMMANDS="$(echo "${COMMANDS} RUN-AGORA-MMCTL" | sed "s/\s/\n/g" | sort | uniq)"
GETMMCTLCOMAND="now" ;;
'-p' | '--post' )
COMMANDS="$(echo "${COMMANDS} POST-AGORA" | sed "s/\s/\n/g" | sort | uniq)"
GETTEAM="now" ;;
-*) # ignore unknown options
;;
*)
ORGA="${ARG%-orga}"
DockerServName="${ORGA}-${mattermostServName}"
AGORACOMMUN=
;;
esac
fi
done
if [ -z "${COMMANDS}" ]; then usage && exit ; fi
MATTER_URI="${ORGA}-${matterHost}.${domain}"
if [ -n "$AGORACOMMUN" ]; then MATTER_URI="${matterHost}.${domain}" ; fi
MATTER_URL="${httpProto}://${MATTER_URI}"
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'VERSION' )
Version && exit ;;
'INIT' )
Init ;;
'RUN-AGORA-MMCTL' )
MmctlCommand "$MMCTLCOMAND" ;;
'POST-AGORA' )
PostMessage ;;
esac
done

117
bin/manageCastopod.sh Executable file

@ -0,0 +1,117 @@
#!/bin/bash
# Castopod management script
# init / versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
QUIET="1"
ONNAS=
CASTOPOD_COMMUN="OUI_PAR_DEFAUT"
DockerServName=${castopodServName}
declare -A Posts
usage() {
echo "${PRG} [OPTION] [COMMANDES] [ORGA]
Manipulation d'un castopod
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
--nas L'orga se trouve sur le NAS !
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du castopod
-v|--version Donne la version du castopod et signale les MàJ
ORGA parmi : ${AVAILABLE_ORGAS}
ou vide si castopod commun
"
}
Init(){
POD_URL="${httpProto}://${ORGA}-${castopodHost}.${domain}"
[ -n "${CASTOPOD_COMMUN}" ] && POD_URL="${httpProto}://${castopodHost}.${domain}"
if ! [[ "$(docker ps -f name=${DockerServName} | grep -w ${DockerServName})" ]]; then
printKazError "Castopod not running... abort"
exit
fi
echo "\n *** Premier lancement de Castopod" >& $QUIET
${SIMU} waitUrl "${POD_URL}"
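# The Castopod install form is CSRF-protected: grab the ci_session cookie, scrape the
# csrf_test_name token from the /cp-install page, then POST the superadmin creation
# with both.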
CI_SESSION=$(echo ${headers} | grep "ci_session" | sed "s/.*ci_session=//")
cookies=$(curl -c - ${POD_URL})
CSRF_TOKEN=$(curl --cookie <(echo "$cookies") ${POD_URL}/cp-install | grep "csrf_test_name" | sed "s/.*value=.//" | sed "s/.>//")
#echo ${CSRF_TOKEN}
${SIMU} curl --cookie <(echo "$cookies") -X POST \
-d "username=${castopod_ADMIN_USER}" \
-d "password=${castopod_ADMIN_PASSWORD}" \
-d "email=${castopod_ADMIN_MAIL}" \
-d "csrf_test_name=${CSRF_TOKEN}" \
"${POD_URL}/cp-install/create-superadmin"
}
Version(){
VERSION="TODO"
echo "Version $DockerServName : ${GREEN}${VERSION}${NC}"
}
########## Main #################
for ARG in "$@"; do
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'--nas' | '-nas' )
ONNAS="SURNAS" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
-*) # ignore unknown options
;;
*)
ORGA="${ARG%-orga}"
DockerServName="${ORGA}-${castopodServName}"
CASTOPOD_COMMUN=
;;
esac
done
if [ -z "${COMMANDS}" ]; then usage && exit ; fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'VERSION' )
Version && exit ;;
'INIT' )
Init ;;
esac
done

393
bin/manageCloud.sh Executable file

@ -0,0 +1,393 @@
#!/bin/bash
# Cloud (Nextcloud) management script
# init / versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
# CLOUD
APPLIS_PAR_DEFAUT="tasks calendar contacts bookmarks mail richdocuments external drawio snappymail ransomware_protection" #rainloop richdocumentscode
QUIET="1"
ONNAS=
CLOUDCOMMUN="OUI_PAR_DEFAUT"
DockerServName=${nextcloudServName}
usage() {
echo "${PRG} [OPTION] [COMMANDES] [ORGA]
Manipulation d'un cloud
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
--nas L'orga se trouve sur le NAS !
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du cloud
-v|--version Donne la version du cloud et signale les MàJ
--optim Lance la procédure Nextcloud pour optimiser les performances ** **
-occ \"command\" Envoie une commande via occ ** **
-u Mets à jour les applis ** SPECIFIQUES **
-i Install des applis ** CLOUD **
-a \"app1 app2 ...\" Choix des appli à installer ou mettre à jour (entre guillemets) ** **
-U|--upgrade Upgrade des clouds ** **
-O|--officeURL MAJ le office de ce nextcloud ** **
ORGA parmi : ${AVAILABLE_ORGAS}
ou vide si cloud commun
"
}
##################################
############### CLOUD ############
##################################
Init(){
NOM=$ORGA
[ -n "${CLOUDCOMMUN}" ] && NOM="commun"
if [ -z "${LISTE_APPS}" ]; then
printKazMsg "Aucune appli n'est précisée, j'installerais les applis par défaut : ${APPLIS_PAR_DEFAUT}" >& $QUIET
LISTE_APPS="${APPLIS_PAR_DEFAUT}"
fi
checkDockerRunning "$DockerServName" "$NOM"
[ $? -ne 0 ] && echo "${CYAN}\n $DockerServName est down : impossible de terminer l'install${NC}" && return 1 >& $QUIET
CONF_FILE="${DOCK_VOL}/orga_${ORGA}-cloudConfig/_data/config.php"
CLOUD_URL="https://${ORGA}-${cloudHost}.${domain}"
if [ -n "$CLOUDCOMMUN" ]; then
CONF_FILE="${DOCK_VOL}/cloud-cloudConfig/_data/config.php"
CLOUD_URL="https://${cloudHost}.${domain}"
elif [ -n "${ONNAS}" ]; then
CONF_FILE="${NAS_VOL}/orga_${ORGA}-cloudConfig/_data/config.php"
fi
firstInstall "$CLOUD_URL" "$CONF_FILE" " NextCloud de $NOM"
updatePhpConf "$CONF_FILE"
InstallApplis
echo "${CYAN} *** Paramétrage richdocuments pour $ORGA${NC}" >& $QUIET
setOfficeUrl
occCommand "config:app:set --value 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 richdocuments wopi_allowlist"
occCommand "config:system:set overwrite.cli.url --value=$CLOUD_URL"
occCommand "config:system:set disable_certificate_verification --value=true"
if [ -n "$CLOUDCOMMUN" ]; then initLdap "$NOM" ; fi
}
Version(){
VERSION=$(docker exec -u 33 ${DockerServName} /var/www/html/occ status | grep -i version:)
VERSION_UPDATE=$(docker exec -u 33 ${DockerServName} /var/www/html/occ update:check | grep -i "available\." | cut -c 1-17)
versionSTR="Version ${DockerServName} : ${GREEN}${VERSION}${NC}"
[ -n "${VERSION_UPDATE}" ] && versionSTR="$versionSTR -- Disponible : ${RED} ${VERSION_UPDATE} ${NC}"
echo "$versionSTR"
}
firstInstall(){
# $1 CLOUD_URL
# $2 phpConfFile
# $3 orga
if ! grep -q "'installed' => true," "$2" 2> /dev/null; then
printKazMsg "\n *** Premier lancement de $3" >& $QUIET
${SIMU} waitUrl "$1"
${SIMU} curl -X POST \
-d "install=true" \
-d "adminlogin=${nextcloud_NEXTCLOUD_ADMIN_USER}" \
-d "adminpass=${nextcloud_NEXTCLOUD_ADMIN_PASSWORD}" \
-d "directory=/var/www/html/data" \
-d "dbtype=mysql" \
-d "dbuser=${nextcloud_MYSQL_USER}" \
-d "dbpass=${nextcloud_MYSQL_PASSWORD}" \
-d "dbname=${nextcloud_MYSQL_DATABASE}" \
-d "dbhost=${nextcloud_MYSQL_HOST}" \
-d "install-recommended-apps=true" \
"$1"
fi
}
setOfficeUrl(){
OFFICE_URL="https://${officeHost}.${domain}"
if [ ! "${site}" = "prod1" ]; then
OFFICE_URL="https://${site}-${officeHost}.${domain}"
fi
occCommand "config:app:set --value $OFFICE_URL richdocuments public_wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments wopi_url"
occCommand "config:app:set --value $OFFICE_URL richdocuments disable_certificate_verification"
}
initLdap(){
# $1 Nom du cloud
echo "${CYAN} *** Installation LDAP pour $1${NC}" >& $QUIET
occCommand "app:enable user_ldap" "${DockerServName}"
occCommand "ldap:delete-config s01" "${DockerServName}"
occCommand "ldap:create-empty-config" "${DockerServName}"
occCommand "ldap:set-config s01 ldapAgentName cn=cloud,ou=applications,${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapAgentPassword ${ldap_LDAP_CLOUD_PASSWORD}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBase ${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBaseGroups ${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapBaseUsers ou=users,${ldap_root}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapExpertUsernameAttr identifiantKaz" "${DockerServName}"
occCommand "ldap:set-config s01 ldapHost ${ldapServName}" "${DockerServName}"
occCommand "ldap:set-config s01 ldapPort 389" "${DockerServName}"
occCommand "ldap:set-config s01 ldapTLS 0" "${DockerServName}"
occCommand "ldap:set-config s01 ldapLoginFilter \"(&(objectclass=nextcloudAccount)(|(cn=%uid)(identifiantKaz=%uid)))\"" "${DockerServName}"
occCommand "ldap:set-config s01 ldapQuotaAttribute nextcloudQuota" "${DockerServName}"
occCommand "ldap:set-config s01 ldapUserFilter \"(&(objectclass=nextcloudAccount)(nextcloudEnabled=TRUE))\"" "${DockerServName}"
occCommand "ldap:set-config s01 ldapUserFilterObjectclass nextcloudAccount" "${DockerServName}"
occCommand "ldap:set-config s01 ldapEmailAttribute mail" "${DockerServName}"
occCommand "ldap:set-config s01 ldapUserDisplayName cn" "${DockerServName}"
occCommand "ldap:set-config s01 ldapUserFilterMode 1" "${DockerServName}"
occCommand "ldap:set-config s01 ldapConfigurationActive 1" "${DockerServName}"
# In mariadb, to let the ldap take over again: delete from oc_users where uid<>'admin';
# docker exec -i nextcloudDB mysql --user=<user> --password=<password> <db> <<< "delete from oc_users where uid<>'admin';"
# Doc : https://help.nextcloud.com/t/migration-to-ldap-keeping-users-and-data/13205
# Example table/keys:
# +-------------------------------+----------------------------------------------------------+
# | Configuration | s01 |
# +-------------------------------+----------------------------------------------------------+
# | hasMemberOfFilterSupport | 0 |
# | homeFolderNamingRule | |
# | lastJpegPhotoLookup | 0 |
# | ldapAgentName | cn=cloud,ou=applications,dc=kaz,dc=sns |
# | ldapAgentPassword | *** |
# | ldapAttributesForGroupSearch | |
# | ldapAttributesForUserSearch | |
# | ldapBackgroundHost | |
# | ldapBackgroundPort | |
# | ldapBackupHost | |
# | ldapBackupPort | |
# | ldapBase | ou=users,dc=kaz,dc=sns |
# | ldapBaseGroups | ou=users,dc=kaz,dc=sns |
# | ldapBaseUsers | ou=users,dc=kaz,dc=sns |
# | ldapCacheTTL | 600 |
# | ldapConfigurationActive | 1 |
# | ldapConnectionTimeout | 15 |
# | ldapDefaultPPolicyDN | |
# | ldapDynamicGroupMemberURL | |
# | ldapEmailAttribute | mail |
# | ldapExperiencedAdmin | 0 |
# | ldapExpertUUIDGroupAttr | |
# | ldapExpertUUIDUserAttr | |
# | ldapExpertUsernameAttr | uid |
# | ldapExtStorageHomeAttribute | |
# | ldapGidNumber | gidNumber |
# | ldapGroupDisplayName | cn |
# | ldapGroupFilter | |
# | ldapGroupFilterGroups | |
# | ldapGroupFilterMode | 0 |
# | ldapGroupFilterObjectclass | |
# | ldapGroupMemberAssocAttr | |
# | ldapHost | ldap |
# | ldapIgnoreNamingRules | |
# | ldapLoginFilter | (&(|(objectclass=nextcloudAccount))(cn=%uid)) |
# | ldapLoginFilterAttributes | |
# | ldapLoginFilterEmail | 0 |
# | ldapLoginFilterMode | 0 |
# | ldapLoginFilterUsername | 1 |
# | ldapMatchingRuleInChainState | unknown |
# | ldapNestedGroups | 0 |
# | ldapOverrideMainServer | |
# | ldapPagingSize | 500 |
# | ldapPort | 389 |
# | ldapQuotaAttribute | nextcloudQuota |
# | ldapQuotaDefault | |
# | ldapTLS | 0 |
# | ldapUserAvatarRule | default |
# | ldapUserDisplayName | cn |
# | ldapUserDisplayName2 | |
# | ldapUserFilter | (&(objectclass=nextcloudAccount)(nextcloudEnabled=TRUE)) |
# | ldapUserFilterGroups | |
# | ldapUserFilterMode | 1 |
# | ldapUserFilterObjectclass | nextcloudAccount |
# | ldapUuidGroupAttribute | auto |
# | ldapUuidUserAttribute | auto |
# | turnOffCertCheck | 0 |
# | turnOnPasswordChange | 0 |
# | useMemberOfToDetectMembership | 1 |
# +-------------------------------+----------------------------------------------------------+
}
updatePhpConf(){
# $1 php_conf_file
if [ $# -ne 1 ]; then
echo "${RED}#Je ne sais pas ou écrire la conf php !${NC}"
return 1
fi
echo "${CYAN} *** Maj de la conf $1${NC}" >& $QUIET
PHPCONF="$1"
_addVarAfterInConf "default_language" " 'default_language' => 'fr'," "CONFIG = array (" "${PHPCONF}"
_addVarAfterInConf "theme" " 'theme' => ''," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "default_phone_region" " 'default_phone_region' => 'FR'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "loglevel" " 'loglevel' => 2," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "maintenance" " 'maintenance' => false," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "app_install_overwrite" " 'app_install_overwrite' => \n array (\n 0 => 'documents',\n )," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "overwriteprotocol" " 'overwriteprotocol' => 'https'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_domain" " 'mail_domain' => '${domain}'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_from_address" " 'mail_from_address' => 'admin'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_smtpport" " 'mail_smtpport' => '25'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_sendmailmode" " 'mail_sendmailmode' => 'smtp'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_smtphost" " 'mail_smtphost' => '${smtpHost}.${domain}'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "mail_smtpmode" " 'mail_smtpmode' => 'smtp'," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "enable_previews" " 'enable_previews' => true," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "trashbin_retention_obligation" " 'trashbin_retention_obligation' => '30, auto'," "'installed' => true," "${PHPCONF}"
#remove the "get a free account" message from the footer
_addVarAfterInConf "simpleSignUpLink.shown" " 'simpleSignUpLink.shown' => false," "'installed' => true," "${PHPCONF}"
_addVarAfterInConf "trusted_proxies" " 'trusted_proxies' => array( 0 => '10.0.0.0/8', 1 => '172.16.0.0/12', 2 => '192.168.0.0/16' )," "'installed' => true," "${PHPCONF}"
}
UpgradeClouds() {
echo "${NC}--------------------------------------------------------" >& $QUIET
echo "UPGRADE des cloud" >& $QUIET
echo "--------------------------------------------------------" >& $QUIET
occCommand "upgrade"
}
OptimiseClouds() {
occCommand "db:add-missing-indices" "db:convert-filecache-bigint --no-interaction"
}
UpdateApplis() {
printKazMsg "UPDATE DES APPLIS du cloud ${DockerServName} : ${LISTE_APPS}" >& $QUIET
if [ -z "${LISTE_APPS}" ]; then
occCommand "app:update --all"
return
fi
echo "Mise à jour de ${LISTE_APPS}" >& $QUIET
for app in ${LISTE_APPS}
do
occCommand "app:update ${app}"
done
}
InstallApplis(){
if [ -z "${LISTE_APPS}" ]; then
printKazMsg "Aucune appli n'est précisée, j'installe les applis par défaut : ${APPLIS_PAR_DEFAUT}" >& $QUIET
LISTE_APPS="${APPLIS_PAR_DEFAUT}"
fi
apps=$LISTE_APPS
if ! [[ "$(docker ps -f name=${DockerServName} | grep -w ${DockerServName})" ]]; then
printKazError "${RED}# ${DockerServName} not running... impossible d'installer les applis${NC}" >& $QUIET
return 1
fi
LIST_ALL=$(docker exec -ti -u 33 "${DockerServName}" /var/www/html/occ app:list |
awk 'BEGIN {cp=0}
/Enabled:/ {cp=1 ; next};
/Disabled:/ {cp=0; next};
{if (cp) print $0};')
for app in $apps
do
grep -wq "${app}" <<<"${LIST_ALL}" 2>/dev/null && echo "${app} dejà installée" >& $QUIET && continue
echo " install ${app}" >& $QUIET
occCommand "app:install ${app}"
done
}
occCommand(){
# $1 Command
${SIMU} docker exec -u 33 $DockerServName /var/www/html/occ $1
}
_addVarAfterInConf(){
# $1 key
# $2 val
# $3 where
# $4 confFile
if ! grep -q "$1" "${4}" ; then
echo -n " ${CYAN}${BOLD}$1${NC}" >& $QUIET
${SIMU} sed -i -e "/$3/a\ $2" "$4"
fi
}
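# Example call (same form as the ones used in updatePhpConf above):
#   _addVarAfterInConf "loglevel" " 'loglevel' => 2," "'installed' => true," "${PHPCONF}"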
########## Main #################
for ARG in "$@"; do
if [ -n "${GETOCCCOMAND}" ]; then # après un -occ
OCCCOMAND="${ARG}"
GETOCCCOMAND=
elif [ -n "${GETAPPS}" ]; then # après un -a
LISTE_APPS="${LISTE_APPS} ${ARG}"
GETAPPS=""
else
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'--nas' | '-nas' )
ONNAS="SURNAS" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
'-U' | '--upgrade')
COMMANDS="$(echo "${COMMANDS} UPGRADE" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-O' | '--officeURL')
COMMANDS="$(echo "${COMMANDS} OFFICEURL" | sed "s/\s/\n/g" | sort | uniq)" ;;
'--optim' | '-optim' )
COMMANDS="$(echo "${COMMANDS} OPTIMISE-CLOUD" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-u' )
COMMANDS="$(echo "${COMMANDS} UPDATE-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-i' )
COMMANDS="$(echo "${COMMANDS} INSTALL-CLOUD-APP" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-a' )
GETAPPS="now" ;;
'--occ' | '-occ' )
COMMANDS="$(echo "${COMMANDS} RUN-CLOUD-OCC" | sed "s/\s/\n/g" | sort | uniq)"
GETOCCCOMAND="now" ;;
-*) # ignore unknown options
;;
*)
ORGA="${ARG%-orga}"
DockerServName="${ORGA}-${nextcloudServName}"
CLOUDCOMMUN=
;;
esac
fi
done
if [ -z "${COMMANDS}" ]; then
usage && exit
fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'VERSION' )
Version && exit ;;
'OPTIMISE-CLOUD' )
OptimiseClouds ;;
'UPDATE-CLOUD-APP' )
UpdateApplis ;;
'UPGRADE' )
UpgradeClouds ;;
'INIT' )
Init ;;
'INSTALL-CLOUD-APP' )
InstallApplis ;;
'OFFICEURL' )
setOfficeUrl ;;
'RUN-CLOUD-OCC' )
occCommand "${OCCCOMAND}";;
esac
done

177
bin/manageWiki.sh Executable file

@ -0,0 +1,177 @@
#!/bin/bash
# Dokuwiki management script
# init / versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
DNLD_DIR="${KAZ_DNLD_DIR}/dokuwiki"
QUIET="1"
ONNAS=
WIKICOMMUN="OUI_PAR_DEFAUT"
DockerServName=${dokuwikiServName}
declare -A Posts
usage() {
echo "${PRG} [OPTION] [COMMANDES] [ORGA]
Manipulation d'un dokuwiki
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
--nas L'orga se trouve sur le NAS !
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du dokuwiki
-v|--version Donne la version du dokuwiki et signale les MàJ
--reload kill lighthttpd
ORGA parmi : ${AVAILABLE_ORGAS}
ou vide si dokuwiki commun
"
}
Init(){
NOM=$ORGA
if [ -n "$WIKICOMMUN" ] ; then NOM="KAZ" ; fi
TPL_DIR="${VOL_PREFIX}wikiLibtpl/_data"
PLG_DIR="${VOL_PREFIX}wikiPlugins/_data"
CONF_DIR="${VOL_PREFIX}wikiConf/_data"
# Gael: I added this but never tested it, so I am leaving it as before...
# up to the next person who sets up a wiki to sort this out
#WIKI_ROOT="${dokuwiki_WIKI_ROOT}"
#WIKI_EMAIL="${dokuwiki_WIKI_EMAIL}"
#WIKI_PASS="${dokuwiki_WIKI_PASSWORD}"
WIKI_ROOT=Kaz
WIKI_EMAIL=wiki@kaz.local
WIKI_PASS=azerty
${SIMU} checkDockerRunning "${DockerServName}" "${NOM}" || exit
if [ ! -f "${CONF_DIR}/local.php" ] ; then
echo "\n *** Premier lancement de Dokuwiki ${NOM}" >& $QUIET
${SIMU} waitUrl "${WIKI_URL}"
${SIMU} curl -X POST \
-A "Mozilla/5.0 (X11; Linux x86_64)" \
-d "l=fr" \
-d "d[title]=${NOM}" \
-d "d[acl]=true" \
-d "d[superuser]=${WIKI_ROOT}" \
-d "d[fullname]=Admin"\
-d "d[email]=${WIKI_EMAIL}" \
-d "d[password]=${WIKI_PASS}" \
-d "d[confirm]=${WIKI_PASS}" \
-d "d[policy]=1" \
-d "d[allowreg]=false" \
-d "d[license]=0" \
-d "d[pop]=false" \
-d "submit=Enregistrer" \
"${WIKI_URL}/install.php"
# XXX initialiser admin:<pass>:admin:<mel>:admin,user
#${SIMU} rsync -auHAX local.php users.auth.php acl.auth.php "${CONF_DIR}/"
${SIMU} sed -i "${CONF_DIR}/local.php" \
-e "s|\(.*conf\['title'\].*=.*'\).*';|\1${NOM}';|g" \
-e "s|\(.*conf\['title'\].*=.*'\).*';|\1${NOM}';|g" \
-e "/conf\['template'\]/d" \
-e '$a\'"\$conf['template'] = 'docnavwiki';"''
${SIMU} sed -i -e "s|\(.*conf\['lang'\].*=.*'\)en';|\1fr';|g" "${CONF_DIR}/dokuwiki.php"
${SIMU} chown -R www-data: "${CONF_DIR}/"
fi
${SIMU} unzipInDir "${DNLD_DIR}/docnavwiki.zip" "${TPL_DIR}/"
${SIMU} chown -R www-data: "${TPL_DIR}/"
# ckgedit: not great
for plugin in captcha smtp todo wrap wrapadd; do
${SIMU} unzipInDir "${DNLD_DIR}/${plugin}.zip" "${PLG_DIR}"
done
${SIMU} chown -R www-data: "${PLG_DIR}/"
}
Version(){
# $1 ContainerName
VERSION=$(docker exec $1 cat /dokuwiki/VERSION)
echo "Version $1 : ${GREEN}${VERSION}${NC}"
}
Reload(){
# $1 ContainerName
if [ -f "${VOL_PREFIX}wikiData/_data/farms/init.sh" ]; then
${SIMU} docker exec -ti "${1}" /dokuwiki/data/farms/init.sh
${SIMU} pkill -KILL lighttpd
fi
}
########## Main #################
for ARG in "$@"; do
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' )
QUIET="/dev/null" ;;
'--nas' | '-nas' )
ONNAS="SURNAS" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'--reload' )
COMMANDS="$(echo "${COMMANDS} RELOAD" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
-*) # ignore unknown options
;;
*)
ORGA="${ARG%-orga}"
DockerServName="${ORGA}-${dokuwikiServName}"
WIKICOMMUN=
;;
esac
done
if [ -z "${COMMANDS}" ]; then usage && exit ; fi
VOL_PREFIX="${DOCK_VOL}/orga_${ORGA}-"
WIKI_URL="${httpProto}://${ORGA}-${dokuwikiHost}.${domain}"
if [ -n "${WIKICOMMUN}" ]; then
VOL_PREFIX="${DOCK_VOL}/dokuwiki_doku"
WIKI_URL="${httpProto}://${dokuwikiHost}.${domain}"
elif [ -n "${ONNAS}" ]; then
VOL_PREFIX="${NAS_VOL}/orga_${ORGA}-"
fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'VERSION' )
Version "${DockerServName}" && exit ;;
'INIT' )
Init "${DockerServName}" ;;
'RELOAD' )
Reload "${DockerServName}";;
esac
done

130
bin/manageWp.sh Executable file

@ -0,0 +1,130 @@
#!/bin/bash
# WordPress management script
# init / versions / restart ...
#
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
#GLOBAL VARS
PRG=$(basename $0)
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
AVAILABLE_ORGAS=${availableOrga[*]//-orga/}
QUIET="1"
ONNAS=
WPCOMMUN="OUI_PAR_DEFAUT"
DockerServName=${wordpressServName}
declare -A Posts
usage() {
echo "${PRG} [OPTION] [COMMANDES] [ORGA]
Manipulation d'un wordpress
OPTIONS
-h|--help Cette aide :-)
-n|--simu SIMULATION
-q|--quiet On ne parle pas (utile avec le -n pour avoir que les commandes)
--nas L'orga se trouve sur le NAS !
COMMANDES (on peut en mettre plusieurs dans l'ordre souhaité)
-I|--install L'initialisation du wordpress
-v|--version Donne la version du wordpress et signale les MàJ
ORGA parmi : ${AVAILABLE_ORGAS}
ou vide si wordpress commun
"
}
Init(){
PHP_CONF="${DOCK_VOL}/orga_${ORGA}-wordpress/_data/wp-config.php"
WP_URL="${httpProto}://${ORGA}-${wordpressHost}.${domain}"
if [ -n "${ONNAS}" ]; then
PHP_CONF="${NAS_VOL}/orga_${ORGA}-wordpress/_data/wp-config.php"
fi
if ! [[ "$(docker ps -f name=${DockerServName} | grep -w ${DockerServName})" ]]; then
printKazError "Wordpress not running... abort"
exit
fi
# XXX trouver un test du genre if ! grep -q "'installed' => true," "${PHP_CONF}" 2> /dev/null; then
echo "\n *** Premier lancement de WP" >& $QUIET
${SIMU} waitUrl "${WP_URL}"
${SIMU} curl -X POST \
-d "user_name=${wp_WORDPRESS_ADMIN_USER}" \
-d "admin_password=${wp_WORDPRESS_ADMIN_PASSWORD}" \
-d "admin_password2=${wp_WORDPRESS_ADMIN_PASSWORD}" \
-d "pw_weak=true" \
-d "admin_email=admin@kaz.bzh" \
-d "blog_public=0" \
-d "language=fr_FR" \
"${WP_URL}/wp-admin/install.php?step=2"
#/* pour forcer les maj autrement qu'en ftp */
_addVarBeforeInConf "FS_METHOD" "define('FS_METHOD', 'direct');" "\/\* That's all, stop editing! Happy publishing. \*\/" "$PHP_CONF"
}
Version(){
VERSION=$(docker exec $DockerServName cat /var/www/html/wp-includes/version.php | grep "wp_version " | sed -e "s/.*version\s*=\s*[\"\']//" | sed "s/[\"\'].*//")
echo "Version $DockerServName : ${GREEN}${VERSION}${NC}"
}
_addVarBeforeInConf(){
# $1 key
# $2 ligne à ajouter avant la ligne
# $3 where
# $4 fichier de conf php
if ! grep -q "$1" "${4}" ; then
echo -n " ${CYAN}${BOLD}$1${NC}" >& $QUIET
${SIMU} sed -i -e "s/$3/$2\\n$3/" "${4}"
fi
}
########## Main #################
for ARG in "$@"; do
case "${ARG}" in
'-h' | '--help' )
usage && exit ;;
'-n' | '--simu')
SIMU="echo" ;;
'-q' | '--quiet' )
QUIET="/dev/null" ;;
'--nas' | '-nas' )
ONNAS="SURNAS" ;;
'-v' | '--version')
COMMANDS="$(echo "${COMMANDS} VERSION" | sed "s/\s/\n/g" | sort | uniq)" ;;
'-I' | '--install' )
COMMANDS="$(echo "${COMMANDS} INIT" | sed "s/\s/\n/g" | sort | uniq)" ;; # le sed sort uniq, c'est pour pas l'avoir en double
-*) # ignore
;;
*)
ORGA="${ARG%-orga}"
DockerServName="${ORGA}-${wordpressServName}"
WPCOMMUN=
;;
esac
done
if [ -z "${COMMANDS}" ]; then usage && exit ; fi
for COMMAND in ${COMMANDS}; do
case "${COMMAND}" in
'VERSION' )
Version && exit ;;
'INIT' )
Init ;;
esac
done

148
bin/migVersProdX.sh Executable file

@ -0,0 +1,148 @@
#!/bin/bash
#koi: pouvoir migrer une orga (data+dns) depuis PROD1 vers PRODx
#kan: 07/12/2023
#ki: françois puis fab (un peu)
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
NAS_VOL="/mnt/disk-nas1/docker/volumes/"
#TODO: ce tab doit être construit à partir de la liste des machines dispos et pas en dur
tab_sites_destinations_possibles=("kazoulet" "prod2")
#par défaut, on prend le premier site
SITE_DST="${tab_sites_destinations_possibles[0]}"
declare -a availableOrga
availableOrga=($(getList "${KAZ_CONF_DIR}/container-orga.list"))
export SIMU=""
export COPY=""
usage () {
echo "Usage: $0 [-n] [-d host_distant] [-c] [orga]...[orga]"
echo " -h : this help"
echo " -d host_distant : ${SITE_DST} par défaut"
echo " -n : simulation"
echo " -c : only copy data but doesn't stop"
echo " [orgas] : in ${availableOrga[@]}"
echo " example : migVersProdX.sh -d kazoulet -c splann-orga && migVersProdX.sh -d kazoulet splann-orga"
exit 1
}
while getopts "hncd:" option; do
case ${option} in
h)
usage
exit 0
;;
n)
SIMU="echo"
;;
c)
COPY="true"
;;
d)
SITE_DST=${OPTARG}
;;
esac
done
# site distant autorisé ?
if [[ " ${tab_sites_destinations_possibles[*]} " == *" $SITE_DST "* ]]; then
true
else
echo
echo "${RED}${BOLD}Sites distants possibles : ${tab_sites_destinations_possibles[@]}${NC}"
echo
usage
exit 0
fi
# Récupérer les orgas dans un tableau
shift $((OPTIND-1))
Orgas=("$@")
#ces orgas existent-elles sur PROD1 ?
for orga in "${Orgas[@]}"; do
if [[ ! " ${availableOrga[@]} " =~ " ${orga} " ]]; then
echo
echo "Unknown orga: ${RED}${BOLD}${ARG}${orga}${NC}"
echo
usage
exit 0
fi
done
echo
echo "Site distant: ${GREEN}${BOLD}${SITE_DST}${NC}"
echo
#for orgaLong in ${Orgas}; do
# echo ${Orgas}
#done
#exit
for orgaLong in "${Orgas[@]}"; do
orgaCourt="${orgaLong%-orga}"
orgaLong="${orgaCourt}-orga"
echo "${BLUE}${BOLD}migration de ${orgaCourt}${NC}"
# if [ -d "${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}" ]; then
# if ! ssh -p 2201 root@${SITE_DST}.${domain} "test -d ${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}"; then
# echo "${RED}${BOLD} ... can't move paheko to ${SITE_DST}${NC}"
# echo " intall paheko in ${SITE_DST}.${domain} before!"
# continue
# fi
# fi
#on créé le répertoire de l'orga pour paheko sur SITE_DST s'il n'existe pas
#pratique quand paheko n'est pas encore installé sur PROD1 mais commandé
if [ -f "${KAZ_COMP_DIR}/${orgaLong}/usePaheko" ]; then
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "mkdir -p ${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt} && chown www-data:www-data ${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}"
#ensuite, on peut refaire la liste des routes paheko pour traefik
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "cd ${KAZ_COMP_DIR}/paheko/ && ./docker-compose-gen.sh"
fi
if [ -z "${COPY}" ]; then
cd "${KAZ_COMP_DIR}/${orgaLong}"
docker-compose logs --tail 100| grep $(date "+ %Y-%m-%d")
checkContinue
${SIMU} docker-compose down
fi
if [ $(ls -d ${NAS_VOL}/orga_${orgaCourt}-* 2>/dev/null | wc -l) -gt 0 ]; then
echo "${BLUE}${BOLD} ... depuis nas${NC}"
${SIMU} rsync -aAhHX --info=progress2 --delete ${NAS_VOL}/orga_${orgaCourt}-* -e "ssh -p 2201" root@${SITE_DST}.${domain}:${DOCK_VOL}
else
echo "${BLUE}${BOLD} ... depuis disque${NC}"
${SIMU} rsync -aAhHX --info=progress2 --delete ${DOCK_VOL}/orga_${orgaCourt}-* -e "ssh -p 2201" root@${SITE_DST}.${domain}:${DOCK_VOL}
fi
if [ -z "${COPY}" ]; then
echo "${BLUE}${BOLD} ... config${NC}"
if [ -d "${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}" ]; then
${SIMU} rsync -aAhHX --info=progress2 --delete "${DOCK_VOL_PAHEKO_ORGA}/${orgaCourt}" -e "ssh -p 2201" root@${SITE_DST}.${domain}:"${DOCK_VOL_PAHEKO_ORGA}/"
fi
${SIMU} rsync -aAhHX --info=progress2 --delete ${KAZ_COMP_DIR}/${orgaLong} -e "ssh -p 2201" root@${SITE_DST}.${domain}:${KAZ_COMP_DIR}/
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "grep -q '^${orgaLong}\$' /kaz/config/container-orga.list || echo ${orgaLong} >> /kaz/config/container-orga.list"
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} ${KAZ_COMP_DIR}/${orgaLong}/init-volume.sh
cd "${KAZ_COMP_DIR}/${orgaLong}"
${SIMU} ./orga-rm.sh
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${KAZ_COMP_DIR}/${orgaLong}/orga-gen.sh" --create
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${KAZ_BIN_DIR}/container.sh" start "${orgaLong}"
${SIMU} ssh -p 2201 root@${SITE_DST}.${domain} "${KAZ_BIN_DIR}/manageCloud.sh" --officeURL "${orgaCourt}"
fi
done

44
bin/migration.sh Executable file

@ -0,0 +1,44 @@
#!/bin/bash
CV1=/kaz-old/bin/container.sh
DV1=/kaz-old/dockers
EV1=/kaz-old/config
SV1=/kaz-old/secret
BV2=/kaz/bin
DV2=/kaz/dockers
EV2=/kaz/config
SV2=/kaz/secret
OV2=/kaz/config/orgaTmpl/orga-gen.sh
[ -x "${CV1}" ] || exit
[ -d "${BV2}" ] || exit
SIMU="echo SIMU"
${SIMU} "${CV1}" stop orga
${SIMU} "${CV1}" stop
${SIMU} rsync "${EV1}/dockers.env" "${EV2}/"
${SIMU} rsync "${SV1}/SetAllPass.sh" "${SV2}/"
${SIMU} "${BV2}/updateDockerPassword.sh"
# XXX ? rsync /kaz/secret/allow_admin_ip /kaz-git/secret/allow_admin_ip
${SIMU} "${BV2}/container.sh" start cloud dokuwiki ethercalc etherpad framadate paheko gitea jirafeau mattermost postfix proxy roundcube web
${SIMU} rsync -aAHXh --info=progress2 "${DV1}/web/html/" "/var/lib/docker/volumes/web_html/_data/"
${SIMU} chown -R www-data: "/var/lib/docker/volumes/web_html/_data/"
${SIMU} cd "${DV1}"
cd "${DV1}"
for ORGA_DIR in *-orga; do
services=$(echo $([ -x "${ORGA_DIR}/tmpl-gen.sh" ] && "${ORGA_DIR}/tmpl-gen.sh" -l))
if [ -n "${services}" ]; then
ORGA="${ORGA_DIR%-orga}"
echo " * ${ORGA}: ${services}"
${SIMU} "${OV2}" "${ORGA}" $(for s in ${services}; do echo "+${s}"; done)
fi
done

172
bin/mvOrga2Nas.sh Executable file

@ -0,0 +1,172 @@
#!/bin/bash
# déplace des orga de
# /var/lib/docker/volumes/
# vers
# /mnt/disk-nas1/docker/volumes/
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
DOCK_NAS="/mnt/disk-nas1/docker/volumes"
DOCK_SRC="${DOCK_VOL}"
DOCK_DST="${DOCK_NAS}"
export PRG="$0"
cd $(dirname $0)
. "${DOCKERS_ENV}"
declare -a availableOrga
availableOrga=($(sed -e "s/\(.*\)[ \t]*#.*$/\1/" -e "s/^[ \t]*\(.*\)-orga$/\1/" -e "/^$/d" "${KAZ_CONF_DIR}/container-orga.list"))
# no more export in .env
export $(set | grep "domain=")
export SIMU=""
export ONLY_SYNC=""
export NO_SYNC=""
export FORCE=""
export REVERSE=""
usage(){
echo "Usage: ${PRG} [orga...]"
echo " -h help"
echo " -n simulation"
echo " -y force"
echo " -s phase1 only"
echo " -r reverse (${DOCK_NAS} to ${DOCK_VOL})"
echo " -ns no pre sync"
exit 1
}
for ARG in $@; do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-n' )
shift
export SIMU="echo"
;;
'-y' )
shift
export FORCE="yes"
;;
'-s' )
shift
export ONLY_SYNC="yes"
;;
'-r' )
shift
export REVERSE="yes"
;;
'-ns' )
shift
export NO_SYNC="yes"
;;
*)
break
;;
esac
done
[[ -z "$1" ]] && usage
for ARG in $@; do
[[ ! " ${availableOrga[*]} " =~ " ${ARG} " ]] && echo "${RED}${ARG}${NC} is not an orga" && usage
done
########################################
# on copie dans new pour que le changement soit atomique
NEW=""
if [ -n "${REVERSE}" ]; then
DOCK_SRC="${DOCK_NAS}"
DOCK_DST="${DOCK_VOL}"
NEW="/new"
${SIMU} mkdir -p "${DOCK_DST}${NEW}"
fi
# on garde une copie dans back au cas où
BACK="${DOCK_SRC}/old"
echo " move from ${BLUE}${BOLD}${DOCK_SRC}${NC} to ${BLUE}${BOLD}${DOCK_DST}${NC}"
checkContinue
cd "${DOCK_SRC}"
volext=$(ls -d orga* | sed 's%.*-%%' | sort -u)
declare -a orgaPhase2
# Pour que l'interruption de service soit la plus courte possible,
# on pré-copie toutes les infos de puis la création de l'orga
echo -n "${BLUE}Phase 1: pre sync.${NC} "
[[ -z "${FORCE}" ]] && [[ -z "${NO_SYNC}" ]] && checkContinue
echo
for ARG in $@; do
for EXT in ${volext}; do
vol="orga_${ARG}-${EXT}"
# test le service existe
[ -e "${DOCK_SRC}/${vol}" ] || continue
# si c'est un lien sur /var/lib c'est déjà fait
[ -z "${REVERSE}" ] && [ -L "${DOCK_SRC}/${vol}" ] && echo "${GREEN}${vol}${NC} : done" && continue
# si c'est un lien sur le NAS c'est un problème
[ -n "${REVERSE}" ] && [ -L "${DOCK_SRC}/${vol}" ] && echo "${GREEN}${vol}${NC} : bug" && continue
# si c'est pas un répertoire c'est un problème
! [ -d "${DOCK_SRC}/${vol}" ] && echo "${RED}${vol}${NC} : done ?" && continue
# si transfert est déjà fait
if [ -n "${REVERSE}" ]; then
! [ -L "${DOCK_DST}/${vol}" ] && echo "${RED}${vol}${NC} : done" && continue
fi
echo " - ${YELLOW}${vol}${NC}"
[[ -z "${NO_SYNC}" ]] && ${SIMU} rsync -auHAX --info=progress2 "${DOCK_SRC}/${vol}/" "${DOCK_DST}${NEW}/${vol}/"
[[ " ${orgaPhase2[@]} " =~ " ${ARG} " ]] || orgaPhase2+=( "${ARG}" )
done
done
[ -n "${ONLY_SYNC}" ] && exit 0
if (( ${#orgaPhase2[@]} == 0 )); then
exit 0
fi
echo -n "${BLUE}Phase 2: mv.${NC} "
[[ -z "${FORCE}" ]] && checkContinue
echo
mkdir -p "${BACK}"
for ARG in "${orgaPhase2[@]}"; do
cd "${KAZ_ROOT}"
cd "${KAZ_COMP_DIR}/${ARG}-orga"
! [ -e "docker-compose.yml" ] && echo "no docker-compose.yml for ${RED}${ARG}${NC}" && continue
${SIMU} docker-compose down
# L'arrêt ne durera que le temps de copier les modifications depuis la phase 1.
for EXT in ${volext}; do
vol="orga_${ARG}-${EXT}"
# test le service existe
[ -e "${DOCK_SRC}/${vol}" ] || continue
# si c'est un lien sur /var/lib c'est déjà fait
[ -z "${REVERSE}" ] && [ -L "${DOCK_SRC}/${vol}" ] && echo "${GREEN}${vol}${NC} : done" && continue
# si c'est un lien sur le NAS c'est un problème
[ -n "${REVERSE}" ] && [ -L "${DOCK_SRC}/${vol}" ] && echo "${GREEN}${vol}${NC} : bug" && continue
# si c'est pas un répertoire c'est un problème
! [ -d "${DOCK_SRC}/${vol}" ] && echo "${RED}${vol}${NC} : done ?" && continue
# si transfert est déjà fait
if [ -n "${REVERSE}" ]; then
! [ -L "${DOCK_DST}/${vol}" ] && echo "${RED}${vol}${NC} : done" && continue
fi
echo " - ${YELLOW}${vol}${NC}"
${SIMU} rsync -auHAX --info=progress2 --delete "${DOCK_SRC}/${vol}/" "${DOCK_DST}${NEW}/${vol}/" || exit 1
${SIMU} mv "${DOCK_SRC}/${vol}" "${BACK}/"
if [ -z "${REVERSE}" ]; then
# cas de /var/lib vers NAS
${SIMU} ln -sf "${DOCK_DST}/${vol}" "${DOCK_SRC}/"
else
# cas NAS vers /var/lib
${SIMU} rm -f "${DOCK_SRC}/${vol}"
${SIMU} mv "${DOCK_DST}${NEW}/${vol}" "${DOCK_DST}/"
fi
done
${SIMU} docker-compose up -d
[[ -x "reload.sh" ]] && "./reload.sh"
echo
done

83
bin/nettoie Executable file

@ -0,0 +1,83 @@
#!/bin/bash
POUBELLE="${HOME}/tmp/POUBELLE"
mkdir -p "${POUBELLE}"
usage () {
echo `basename "$0"` " [-] [-h] [-help] [-clean] [-wipe] [-n] [directory ...]"
echo " remove temporaries files"
echo " - Treat the following arguments as filenames \`-\' so that"
echo " you can specify filenames starting with a minus."
echo " -h"
echo " -help Display this help."
echo " -n Simulate the remove (juste print files)."
echo " directories are the roots where the purge had to be done. If no"
echo " roots are given, the root is the home directory."
}
DETRUIT=""
ANT_OPT=""
ANT_CMD=""
case "$1" in
'-' )
shift;;
'-n' )
DETRUIT="echo"
ANT_OPT="-p"
shift;;
'-clean' )
ANT_CMD="clean"
shift;;
'-wipe' )
ANT_CMD="wipe"
shift;;
'-h' | '-help' )
usage
shift
exit;;
esac
DIRS=$*
if test "$#" -le 1
then
DIRS="$*"
if test -z "$1" -o -d "$1"
then
cd $1 || exit
DIRS=.
fi
fi
if test "${ANT_CMD}" != ""
then
find $DIRS -type f -name build.xml -execdir ant -f {} "${ANT_CMD}" \;
find $DIRS -type f -name Makefile\* -execdir make -f {} "${ANT_CMD}" \;
exit
fi
find $DIRS -type d -name .xvpics -exec $DETRUIT rm -r {} \; -prune
find $DIRS '(' \
-type d -name POUBELLE -prune \
-o \
-type f '(' \
-name core -o -name '*.BAK' -o -name '*.bak' -o -name '*.CKP' \
-o -name '.*.BAK' -o -name '.*.bak' -o -name '.*.CKP' \
-o -name '.*.back' -o -name '*.back' \
-o -name '*.backup' -o -name '*.backup ' \
-o -name '.*.backup' -o -name '.*.backup ' \
-o -name .make.state \
-o -name 'untitled*' -o -name 'Sansnom' \
-o -name '.emacs_*' -o -name '.wi_*' \
-o -name 'ws_ftp.log' -o -name 'hs_err*.log' \
-o -name '#*' -o -name '*~' -o -name '.*~' -o -name junk \
-o -name '.~lock.*#' \
-o -name '*%' -o -name '.*%' \
')'\
-print -exec $DETRUIT mv -f '{}' "${POUBELLE}" \; \
')'
# -o -name '*.ps' -o -name '.*.ps' \
# -o -name '*.i' -o -name '*.ixx' \
# -o -name '.*.sav' -o -name '*.sav' \

24
bin/nextcloud_maintenance.sh Executable file

@ -0,0 +1,24 @@
#!/bin/bash
#on récupère toutes les variables et mdp
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_AGORA=https://$matterHost.$domain/api/v4
EQUIPE=kaz
PostMattermost() {
PostM=$1
CHANNEL=$2
TEAMID=$(curl -s -H "Authorization: Bearer ${mattermost_token}" "${URL_AGORA}/teams/name/${EQUIPE}" | jq .id | sed -e 's/"//g')
CHANNELID=$(curl -s -H "Authorization: Bearer ${mattermost_token}" ${URL_AGORA}/teams/${TEAMID}/channels/name/${CHANNEL} | jq .id | sed -e 's/"//g')
curl -s -X POST -i -H "Authorization: Bearer ${mattermost_token}" -d "{\"channel_id\":\"${CHANNELID}\",\"message\":\"${PostM}\"}" "${URL_AGORA}/posts" >/dev/null 2>&1
}
LISTEORGA=$(ls -F1 /var/lib/docker/volumes/ | grep cloudData | sed -e 's/^orga_//g' -e 's/-cloudData\///g')
for CLOUD in ${LISTEORGA}
do
/kaz/bin/gestContainers.sh -cloud -occ "maintenance:mode" ${CLOUD} | grep -i enable && PostMattermost "ERREUR : Le cloud ${CLOUD} sur ${site} est en mode maintenance" "Sysadmin-alertes"
done

27
bin/postfix-superviz.sh Executable file

@ -0,0 +1,27 @@
#!/bin/bash
# supervision de sympa
#KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
URL_AGORA=${matterHost}.${domain}
MAX_QUEUE=50
OLDIFS=$IFS
IFS=" "
COUNT_MAILQ=$(docker exec -ti mailServ mailq | tail -n1 | gawk '{print $5}')
docker exec ${mattermostServName} bin/mmctl --suppress-warnings auth login $httpProto://$URL_AGORA --name local-server --username $mattermost_user --password $mattermost_pass >/dev/null 2>&1
if [ "${COUNT_MAILQ}" -gt "${MAX_QUEUE}" ]; then
echo "---------------------------------------------------------- "
echo -e "Mail queue Postfix ALert, Messages: ${RED}${COUNT_MAILQ}${NC}"
echo "---------------------------------------------------------- "
docker exec mattermostServ bin/mmctl post create kaz:Sysadmin-alertes --message "Alerte mailq Postfix : La file d' attente est de ${COUNT_MAILQ} messages" >/dev/null 2>&1
fi
IFS=${OLDIFS}

17
bin/runAlertings.sh Executable file

@ -0,0 +1,17 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
. "${DOCKERS_ENV}"
for dockerToTest in $KAZ_ROOT/dockers/*/alerting/; do
for aTest in $dockerToTest/*; do
res=$($aTest)
if [ -n "$res" ]; then
echo $res
docker exec -ti mattermostServ bin/mmctl post create kaz:Sysadmin-alertes --message "${res:0:1000}"
fi
done
done

37
bin/sauve_memory.sh Executable file

@ -0,0 +1,37 @@
#! /bin/sh
# date: 30/03/2022
# koi: récupérer du swap et de la ram (uniquement sur les services qui laissent filer la mémoire)
# ki: fab
#pour vérifier les process qui prennent du swap : for file in /proc/*/status ; do awk '/Tgid|VmSwap|Name/{printf $2 " " $3}END{ print ""}' $file; done | grep kB | sort -k 3 -n
# Ces commandes donnent le nom du process, son PID et la taille mémoire en swap. Par exemple :
# dans /proc/<PID>/status y'a un VMSwap qui est la taille de swap utilisée par le process.
#calc
docker restart ethercalcDB ethercalcServ
#sympa
#docker restart sympaServ
#/kaz/dockers/sympa/reload.sh
# --> bof, ça arrête mal le bazar (4 mails d'ano autour de sympa_msg.pl / bounced.pl / task_manager.pl / bulk.pl / archived.pl)
#sympa
# restart de sympa et relance du script de copie des librairies du filtre de messages
#docker exec -it sympaServ service sympa stop
#sleep 5
#docker exec -it sympaServ service sympa start
#sleep 5
#/kaz/dockers/sympa/reload.sh
#sleep 2
#docker exec sympaServ chmod 777 /home/filter/filter.sh
#docker exec sympaServ sendmail -q
#pour restart cette s.... de collabora
/kaz/bin/gestContainers.sh -office -m -r
#postfix
docker exec -it mailServ supervisorctl restart changedetector
#proxy
docker exec -i proxyServ bash -c "/etc/init.d/nginx reload"

18
bin/sauve_serveur.sh Executable file

@ -0,0 +1,18 @@
#! /bin/sh
# date: 12/11/2020
#PATH=/bin:/sbin:/usr/bin:/usr/sbin
PATH_SAUVE="/home/sauve/"
iptables-save > $PATH_SAUVE/iptables.sav
dpkg --get-selections > $PATH_SAUVE/dpkg_selection
tar -clzf $PATH_SAUVE/etc_sauve.tgz /etc 1> /dev/null 2> /dev/null
tar -clzf $PATH_SAUVE/var_spool.tgz /var/spool 1> /dev/null 2> /dev/null
tar -clzf $PATH_SAUVE/root.tgz /root 1> /dev/null 2> /dev/null
tar -clzf $PATH_SAUVE/kaz.tgz /kaz 1> /dev/null 2> /dev/null
rsync -a /var/spool/cron/crontabs $PATH_SAUVE
#sauve les bases
/kaz/bin/container.sh save

385
bin/scriptBorg.sh Executable file

@ -0,0 +1,385 @@
#!/bin/bash
# --------------------------------------------------------------------------------------
# Didier
#
# Script de sauvegarde avec BorgBackup
# la commande de creation du dépot est : borg init --encryption=repokey /mnt/backup-nas1/BorgRepo
# la conf de borg est dans /root/.config/borg
# Le repository peut etre distant: BORG_REPO='ssh://user@host:port/path/to/repo'
# la clé ssh devra être copiée sur le site distant et l' init se fera sous la forme
# borg init --encryption=repokey ssh://user@host:port/path/to/repo
# la clé est modifiable avec la commande borg key change-passphrase
# toutes les variables sont dans la config générale de KAZ
# scripts PRE et Post
# les script pre et post doivent s' appelle pre_xxxxx.sh ou post_xxxx.sh
# La variable BORGSCRIPTS est le chemin du repertoire des scripts dans la config générale de Kaz
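# Exemple (simple esquisse indicative ; hôte, port et chemins ci-dessous purement hypothétiques) :
#   # initialisation d'un dépôt distant via ssh
#   borg init --encryption=repokey ssh://borg@nas.example.org:2222/srv/borg/repo
#   # pré-traitement minimal déposé dans le répertoire BORGSCRIPTS (nom en pre_*.sh)
#   cat > pre_dump_bases.sh <<'EOF'
#   #!/bin/bash
#   # sauvegarde des bases avant la passe borg (cf. /kaz/bin/container.sh save dans sauve_serveur.sh)
#   /kaz/bin/container.sh save
#   EOF
#   chmod +x pre_dump_bases.sh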
#####################################################
#KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
KAZ_ROOT=/kaz
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
VERSION="V-18-05-2024"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
#IFS=' '
#####################################################
# Variables générales
#####################################################
# le volume monté ou sera le repo
# impérativement laisser vide dans le cas d' un repo distant
VOLUME_SAUVEGARDES=${borg_VOLUME_SAUVEGARDES}
# le repo Borg
export BORG_REPO=${borg_BORG_REPO}
# Le mot de passe du repo
export BORG_PASSPHRASE=${borg_BORG_PASSPHRASE}
# les personnes qui recevront le rapport de sauvegarde
MAIL_RAPPORT=${borg_MAIL_RAPPORT}
# Recevoir un mail quand la sauvegarde est OK ?
MAILOK=${borg_MAILOK}
MAILWARNING=${borg_MAILWARNING}
MAILDETAIL=${borg_MAILDETAIL}
# repertoire de montage des sauvegardes pour les restaurations
BORGMOUNT=${borg_BORGMOUNT}
# - la liste des repertoires à sauver séparés par un espace
LISTREPSAUV=${borg_LISTREPSAUV}
# - Les sauvegardes à garder jour, semaines, mois
NB_BACKUPS_JOUR=${borg_NB_BACKUPS_JOUR}
NB_BACKUPS_SEM=${borg_NB_BACKUPS_SEM}
NB_BACKUPS_MOIS=${borg_NB_BACKUPS_MOIS}
# Le Repertoire ou sont les pré traitement
BORGSCRIPTS=${borg_BORGSCRIPTS}
BORGLOG="${borg_BORGLOG}/BorgBackup-$(date +%d-%m-%Y-%H-%M-%S).log"
DEBUG=false
#####################################################
#
FICLOG="/var/log/${PRG}.log"
#####################################################
trap 'LogFic "script stoppé sur un SIGTERM ou SIGINT" >&2; exit 2' INT TERM
LogFic() {
[ ! -w ${FICLOG} ] && { echo "Probleme d' ecriture dans $FICLOG" ; exit 1 ;}
echo "$(date +%d-%m-%Y-%H-%M-%S) : $1" >> ${FICLOG}
}
#
ExpMail() {
MAIL_SOURCE=$1
MAIL_SUJET=$2
MAIL_DEST=$3
MAIL_TEXTE=$4
# a mettre ailleurs
mailexp=${borg_MAILEXP}
mailpassword=${borg_MAILPASSWORD}
mailserveur=${borg_MAILSERVEUR}
#
#sendemail -t ${MAIL_DEST} -u ${MAIL_SUJET} -m ${MAIL_TEXTE} -f $mailexp -s $mailserveur:587 -xu $mailexp -xp $mailpassword -o tls=yes >/dev/null 2>&1
printf "Subject:${MAIL_SUJET}\n${MAIL_TEXTE}" | msmtp ${MAIL_DEST}
#docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset="UTF-8"' -r ${MAIL_SOURCE} -s "${MAIL_SUJET}" ${MAIL_DEST} << EOF
#${MAIL_TEXTE}
#EOF
}
Pre_Sauvegarde() {
if [ -d ${BORGSCRIPTS} ]
then
cd ${BORGSCRIPTS}
for FicPre in $(ls )
do
if [ -x ${FicPre} ] && [ $(echo ${FicPre} | grep -i ^pre_) ]
then
LogFic " - Pré traitement de la sauvegarde : ${FicPre}"
[ "$DEBUG" = true ] && echo " - Pré traitement de la sauvegarde : ${FicPre}"
./${FicPre}
fi
done
fi
}
Post_Sauvegarde() {
if [ -d ${BORGSCRIPTS} ]
then
cd ${BORGSCRIPTS}
for FicPre in $(ls )
do
if [ -x ${FicPre} ] && [ $(echo ${FicPre} | grep -i ^post_) ]
then
LogFic " - Post traitement de la sauvegarde : ${FicPre}"
[ "$DEBUG" = true ] && echo " - Post traitement de la sauvegarde : ${FicPre}"
./${FicPre}
fi
done
fi
}
Sauvegarde() {
Pre_Sauvegarde
BACKUP_PRE=$?
borg create \
--filter AME \
--exclude-caches \
--stats \
--show-rc \
--exclude 'home/*/.cache/*' \
--exclude 'var/tmp/*' \
::$(date +%Y-%m-%d-%H-%M-%S-%A)-{hostname} \
${LISTREPSAUV} >>${BORGLOG} 2>>${BORGLOG}
BACKUP_EXIT=$?
Post_Sauvegarde
BACKUP_POST=$?
}
Purge() {
borg prune \
--prefix '{hostname}-' \
--keep-daily ${NB_BACKUPS_JOUR} \
--keep-weekly ${NB_BACKUPS_SEM} \
--keep-monthly ${NB_BACKUPS_MOIS}
PRUNE_EXIT=$?
}
Compact() {
borg compact --progress ${BORG_REPO}
}
usage() {
echo "-h : Usage"
echo "-c : Permet de compacter ${BORG_REPO}"
echo "-d : Permet de verifier les variables de sauvegarde"
echo "-i : Mode interractif"
echo "-l : Liste les sauvegardes sans monter ${BORG_REPO}"
echo "-m : Monte le REPO (${BORG_REPO} sur ${BORGMOUNT})"
echo "-p : Permet de lancer la phase de purge des backup en fonctions des variables: jour=${NB_BACKUPS_JOUR},semaine=${NB_BACKUPS_SEM},mois=${NB_BACKUPS_MOIS}"
echo "-s : Lance la sauvegarde"
echo "-u : Demonte le REPO (${BORG_REPO} de ${BORGMOUNT})"
echo "-v : Version"
exit
}
Borgvariables() {
echo "-----------------------------------------------------------"
echo " Variables applicatives pour le site ${site}"
echo "-----------------------------------------------------------"
for borgvar in $(set | grep borg_ | sed -e 's/borg_//' -e 's/=.*$//' | grep ^[A-Z])
do
echo "$borgvar=${!borgvar}"
done
if grep borgfs /etc/mtab >/dev/null 2>&1
then
echo -e "${RED}WARNING${NC}: ${BORG_REPO} est monté sur ${BORGMOUNT}"
fi
}
Borgmount() {
LogFic "Montage du repo ${BORG_REPO} sur ${BORGMOUNT} .. "
echo -en "Montage du repo ${BORG_REPO} sur ${BORGMOUNT} .. "
borg mount ${BORG_REPO} ${BORGMOUNT} >/dev/null 2>&1
if [ $? = 0 ]
then
LogFic "Ok"
echo -e "${GREEN}Ok${NC}"
else
LogFic "Error"
echo -e "${RED}Error $?${NC}"
fi
exit
}
Borgumount() {
LogFic "Demontage du repo ${BORG_REPO} sur ${BORGMOUNT} .. "
echo -en "Demontage du repo ${BORG_REPO} sur ${BORGMOUNT} .. "
borg umount ${BORGMOUNT} >/dev/null 2>&1
if [ $? = 0 ]
then
LogFic "Ok"
echo -e "${GREEN}Ok${NC}"
else
LogFic "Error"
echo -e "${RED}Error $?${NC}"
fi
exit
}
Borglist() {
LogFic "Borg list demandé"
borg list --short ${BORG_REPO}
exit
}
main() {
# ****************************************************** Main *******************************************************************
# Création du fichier de log
touch ${FICLOG}
type -P msmtp >/dev/null || { echo "msmtp non trouvé";exit 1;}
#
LogFic "#########################################################################################################################"
LogFic " *************** ${PRG} Version ${VERSION} ***************"
LogFic "#########################################################################################################################"
# test si les variables importantes sont renseignées et sortie si tel n' est pas le cas
if [ -z "${VOLUME_SAUVEGARDES}" ] && [ -z "${BORG_REPO}" ] || [ -z "${BORG_REPO}" ] || [ -z "${BORG_PASSPHRASE}" ] || [ -z "${MAIL_RAPPORT}" ]
then
echo "Les variables VOLUME_SAUVEGARDES, BORG_REPO, BORG_PASSPHRASE, MAIL_RAPPORT sont à verifier"
LogFic "Les variables VOLUME_SAUVEGARDES, BORG_REPO, BORG_PASSPHRASE, MAIL_RAPPORT sont à verifier"
LogFic "Sortie du script"
exit 1
fi
# test si le volume de sauvegarde est ok
if [ ! -z ${VOLUME_SAUVEGARDES} ]
then
grep -q "${VOLUME_SAUVEGARDES}" /etc/mtab || { echo "le volume de sauvegarde ${VOLUME_SAUVEGARDES} n' est pas monté"; LogFic "Erreur de montage du volume ${VOLUME_SAUVEGARDES} de sauvegarde" ; exit 1;}
else
echo "${BORG_REPO}" | grep -qi ssh || { echo "Problème avec le repo distant ";exit 1;}
fi
# Test si le REPO est monté : on sort
if grep borgfs /etc/mtab >/dev/null 2>&1
then
echo "le REPO : ${BORG_REPO} est monté , je sors"
LogFic "le REPO : ${BORG_REPO} est monté , je sors"
ExpMail borg@${domain} "${site} : Sauvegarde en Erreur !!!!" ${MAIL_RAPPORT} "le REPO : ${BORG_REPO} est monté, sauvegarde impossible"
exit 1
fi
# Tout se passe bien on continue
LogFic " - Repertoire a sauver : ${LISTREPSAUV}"
LogFic " - Volume Nfs monté : ${VOLUME_SAUVEGARDES}"
LogFic " - Repertoire des sauvegardes : ${BORG_REPO}"
[ ! -d ${BORGSCRIPTS} ] && LogFic "Pas de repertoire de PRE et POST" || LogFic " - Repertoire des scripts Post/Pré : ${BORGSCRIPTS}"
[ "${DEBUG}" = true ] && [ -d ${BORGSCRIPTS} ] && echo "Rep des scripts PRE/POST :${BORGSCRIPTS}"
LogFic " - Rapport par Mail : ${MAIL_RAPPORT}"
LogFic " - Backups jour : ${NB_BACKUPS_JOUR} , Backups semaines : ${NB_BACKUPS_SEM} , Backups mois : ${NB_BACKUPS_MOIS}"
[ "${DEBUG}" = true ] && echo "${LISTREPSAUV} sauvé dans ${BORG_REPO}, Rapport vers : ${MAIL_RAPPORT}"
LogFic "#########################################################################################################################"
LogFic " - Démarrage de la sauvegarde"
[ "$DEBUG" = true ] && echo "Demarrage de la sauvegarde : "
LogFic " - Log dans ${BORGLOG}"
Sauvegarde
[ "$DEBUG" = true ] && echo "code de retour de backup : ${BACKUP_EXIT}"
LogFic " - Code de retour de la commande sauvegarde : ${BACKUP_EXIT}"
LogFic " - Démarrage du nettoyage des sauvegardes"
[ "$DEBUG" = true ] && echo "Nettoyage des sauvegardes: "
Purge
LogFic " - Code retour du Nettoyage des sauvegardes (0=OK; 1=WARNING, 2=ERROR) : ${PRUNE_EXIT}"
[ "$DEBUG" = true ] && echo "code de retour prune : ${PRUNE_EXIT}"
#
########################################################################################
# si la variable MAILDETAIL est true alors on affecte le contenu du log sinon LOGDATA est VIDE
LOGDATA=""
[ "$MAILDETAIL" = true ] && LOGDATA=$(cat ${BORGLOG})
[ "$DEBUG" = true ] && [ "$MAILDETAIL" = true ] && echo "Envoi du mail à ${MAIL_RAPPORT}"
# On teste le code retour de la sauvegarde, on log et on envoie des mails
case "${BACKUP_EXIT}" in
'0' )
IFS=''
MESS_SAUVE_OK="
Salut
La sauvegarde est ok, ce message peut être enlevé avec la variable MAILOK=false
Que la force soit avec toi
BorgBackup
"
LogFic " - la sauvegarde est OK"
[ "$MAILOK" = true ] && ExpMail borg@${domain} "${site} : Sauvegarde Ok" ${MAIL_RAPPORT} ${MESS_SAUVE_OK}${LOGDATA}
IFS=' '
;;
'1' )
IFS=''
MESS_SAUVE_ERR="
Salut
La sauvegarde est en warning
Code de retour de la commande sauvegarde : ${BACKUP_EXIT}
Le log contenant les infos est ${BORGLOG}
BorgBackup
"
LogFic " - Sauvegarde en Warning: ${BACKUP_EXIT}"
[ "$MAILWARNING" = true ] && ExpMail borg@${domain} "${site} : Sauvegarde en Warning: ${BACKUP_EXIT}" ${MAIL_RAPPORT} ${MESS_SAUVE_ERR}${LOGDATA}
IFS=' '
;;
* )
IFS=''
MESS_SAUVE_ERR="
Salut
La sauvegarde est en Erreur
Code de retour de la commande sauvegarde : ${BACKUP_EXIT}
Le log à consulter est ${BORGLOG}
BorgBackup
"
LogFic " - !!!!! Sauvegarde en Erreur !!!!! : ${BACKUP_EXIT}"
ExpMail borg@${domain} "${site} : Sauvegarde en Erreur !!!! : ${BACKUP_EXIT}" ${MAIL_RAPPORT} ${MESS_SAUVE_ERR}${LOGDATA}
IFS=' '
;;
esac
LogFic " - Fin de la sauvegarde"
exit
}
[ ! "$#" -eq "0" ] || usage
# On teste les arguments pour le script
for ARG in $@; do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-m' )
shift
Borgmount
;;
'-u' )
shift
Borgumount
;;
'-l' )
shift
Borglist
;;
'-i' )
shift
DEBUG=true
;;
'-v' )
shift
echo "Version : ${VERSION}"
exit
;;
'-d' )
shift
Borgvariables
exit
;;
'-c' )
shift
Compact
exit
;;
'-s' )
main
;;
'-p' )
shift
read -p "Ok pour lancer la purge en fonction de ces valeurs : jour=${NB_BACKUPS_JOUR},semaine=${NB_BACKUPS_SEM},mois=${NB_BACKUPS_MOIS} ? O/N : " READPURGE
[[ ${READPURGE} =~ ^[oO]$ ]] && Purge || echo "pas de purge"
exit
;;
* | ' ' )
usage
;;
esac
done

161
bin/scriptSauve.sh Executable file

@ -0,0 +1,161 @@
#!/bin/bash
# Didier le 14 Septembre 2022
#
# TODO : Inclure un script post et pre.
#
#####################################################
#KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
KAZ_ROOT="/kaz"
. $KAZ_ROOT/bin/.commonFunctions.sh
setKazVars
. $DOCKERS_ENV
. $KAZ_ROOT/secret/SetAllPass.sh
VERSION="1.0"
PRG=$(basename $0)
RACINE=$(echo $PRG | awk '{print $1}')
IFS=' '
#
#####################################################
MAILSOURCE="sauve@kaz.bzh"
VOLUME_SAUVEGARDES="/mnt/backup-nas1"
#SAUVE_REPO=${VOLUME_SAUVEGARDES}/SauveRepo
SAUVE_REPO=admin@nas-kaz1:/share/Backup/SauveRepo
MAIL_RAPPORT="didier@kaz.bzh;francois@kaz.bzh;fab@kaz.bzh;fanch@kaz.bzh"
#####################################################
SCRIPTLOG="/mnt/disk-nas1/log/${PRG}-$(date +%d-%m-%Y-%H-%M-%S).log"
FICLOG="/var/log/${PRG}.log"
#####################################################
# - la liste des repertoires à sauver séparés par un espace
LISTREPSAUV="/var/lib/docker/volumes /kaz"
#####################################################
# - Les sauvegardes à garder jour, semaines, mois
NB_BACKUPS_JOUR=15
NB_BACKUPS_SEM=8
NB_BACKUPS_MOIS=12
#####################################################
# Recevoir un mail quand la sauvegarde est OK ?
MAILOK=true
#####################################################
trap 'LogFic "script stoppé sur un SIGTERM ou SIGINT" >&2; exit 2' INT TERM
LogFic() {
[ ! -w ${FICLOG} ] && { echo "Probleme d' ecriture dans $FICLOG" ; exit 1 ;}
echo "$(date +%d-%m-%Y-%H-%M-%S) : $1" >> ${FICLOG}
}
#
ExpMail() {
MAIL_SOURCE=$1
MAIL_SUJET=$2
MAIL_DEST=$3
MAIL_TEXTE=$4
docker exec -i mailServ mailx -a 'Content-Type: text/plain; charset="UTF-8"' -r ${MAIL_SOURCE} -s "${MAIL_SUJET}" ${MAIL_DEST} << EOF
${MAIL_TEXTE}
EOF
}
Sauvegarde() {
#$1 est le repertoire à sauver, on créé le sous repertoire dans le repo
CODE_TMP=""
if [ -r $1 ]
then
echo "Sauvegarde $1" >>${SCRIPTLOG}
#mkdir -p ${SAUVE_REPO}/$1 >/dev/null 2>&1
#rdiff-backup --verbosity 3 $1 ${SAUVE_REPO}/$1 >>${SCRIPTLOG} 2>>${SCRIPTLOG}
rsync -aAHXh --del --stats --exclude 'files_trashbin' $1 ${SAUVE_REPO} >>${SCRIPTLOG} 2>>${SCRIPTLOG}
CODE_TMP=$?
else
LogFic "$1 n' existe pas ou n' est pas accessible en lecture"
CODE_TMP=1
fi
LogFic "Code Retour de la sauvegarde de $1 : ${CODE_TMP}"
BACKUP_EXIT=$(expr ${BACKUP_EXIT} + ${CODE_TMP} )
}
#
Purge() {
echo "Commande prune de rdiff-backup"
PRUNE_EXIT=$?
}
# ****************************************************** Main *******************************************************************
# Création du fichier de log
touch ${FICLOG}
#
LogFic "#########################################################################################################################"
LogFic " *************** ${PRG} Version ${VERSION} ***************"
LogFic "#########################################################################################################################"
# test si les variables importantes sont renseignées et sortie si tel n' est pas le cas
if [ -z "${VOLUME_SAUVEGARDES}" ] || [ -z "${SAUVE_REPO}" ]
then
echo "VOLUME_SAUVEGARDES et SAUVE_REPO à verifier"
LogFic "VOLUME_SAUVEGARDES et SAUVE_REPO à verifier"
LogFic "Sortie du script"
exit 1
fi
#####################################################################################################################################################
################### Mise en commentaire de cette section puisque le repo est en rsync ( voir plus tard comment gérer ça )
# test si le volume de sauvegarde est ok
#grep "${VOLUME_SAUVEGARDES}" /etc/mtab >/dev/null 2>&1
#if [ "$?" -ne "0" ]
#then
# echo "le volume de sauvegarde ${VOLUME_SAUVEGARDES} n' est pas monté"
# LogFic "Erreur de montage du volume ${VOLUME_SAUVEGARDES} de sauvegarde"
# exit 1
#fi
# Test si j' ai le droit d' écrire dans le Repo
# [ ! -w ${SAUVE_REPO} ] && { echo "Verifier le droit d' écriture dans ${SAUVE_REPO}" ; LogFic "Verifier le droit d' écriture dans ${SAUVE_REPO}"; exit 1;}
#####################################################################################################################################################
# Tout se passe bien on continue
LogFic " - Repertoire a sauver : ${LISTREPSAUV}"
#LogFic " - Volume Nfs monté : ${VOLUME_SAUVEGARDES}"
LogFic " - Destination des sauvegardes : ${SAUVE_REPO}"
LogFic " - Rapport par Mail : ${MAIL_RAPPORT}"
#LogFic " - Backups jour : ${NB_BACKUPS_JOUR} , Backups semaines : ${NB_BACKUPS_SEM} , Backups mois : ${NB_BACKUPS_MOIS}"
LogFic "#########################################################################################################################"
LogFic " - Démarrage de la sauvegarde"
LogFic " - Log dans ${SCRIPTLOG}"
BACKUP_EXIT=0
PRUNE_EXIT=0
for REPS in ${LISTREPSAUV}
do
LogFic "Sauvegarde de ${REPS}"
Sauvegarde ${REPS}
done
LogFic "Code retour compilé de toutes les sauvegardes : ${BACKUP_EXIT}"
################################## a gérer plus tard
#LogFic " - Démarrage du nettoyage des sauvegardes"
#Purge
#LogFic " - Code retour du Nettoyage des sauvegardes (0=OK; 1=WARNING, 2=ERROR) : ${PRUNE_EXIT}"
#
########################################################################################
# On teste le code retour de la sauvegarde, on log et on envoie des mails
case "${BACKUP_EXIT}" in
'0' )
IFS=''
MESS_SAUVE_OK="
Salut
La sauvegarde est ok, ce message peut être enlevé avec la variable MAILOK=false
Que la force soit avec toi
Ton esclave des sauvegardes"
LogFic " - la sauvegarde est OK"
[ "$MAILOK" = true ] && ExpMail ${MAILSOURCE} "Sauvegarde Ok" ${MAIL_RAPPORT} ${MESS_SAUVE_OK}
IFS=' '
;;
* )
IFS=''
MESS_SAUVE_ERR="
Salut
La sauvegarde est en Erreur
Le log à consulter est ${SCRIPTLOG}
Code retour de la Sauvegarde ( code Rsync ): ${BACKUP_EXIT}
Ton esclave des sauvegardes"
LogFic " - !!!!! Sauvegarde en Erreur !!!!!"
ExpMail ${MAILSOURCE} "!!!! Sauvegarde en Erreur !!!!" ${MAIL_RAPPORT} ${MESS_SAUVE_ERR}
IFS=' '
;;
esac
LogFic " - Fin de la sauvegarde"

72
bin/secretGen.sh Executable file

@ -0,0 +1,72 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_ROOT}"
NEW_DIR="secret"
TMPL_DIR="secret.tmpl"
if [ ! -d "${NEW_DIR}/" ]; then
rsync -a "${TMPL_DIR}/" "${NEW_DIR}/"
fi
NEW_FILE="${NEW_DIR}/SetAllPass-new.sh"
TMPL_FILE="${NEW_DIR}/SetAllPass.sh"
while read line ; do
if [[ "${line}" =~ ^# ]] || [ -z "${line}" ] ; then
echo "${line}"
continue
fi
if [[ "${line}" =~ "--clean_val--" ]] ; then
case "${line}" in
*jirafeau_DATA_DIR*)
JIRAFEAU_DIR=$(getValInFile "${DOCKERS_ENV}" "jirafeauDir")
[ -z "${JIRAFEAU_DIR}" ] &&
echo "${line}" ||
sed "s%\(.*\)--clean_val--\(.*\)%\1${JIRAFEAU_DIR}\2%" <<< ${line}
continue
;;
*DATABASE*)
dbName="$(sed "s/\([^_]*\)_.*/\1/" <<< ${line})_$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${dbName}\2/" <<< ${line}
continue
;;
*ROOT_PASSWORD*|*PASSWORD*)
pass="$(apg -n 1 -m 16 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
*USER*)
user="$(sed "s/\([^_]*\)_.*/\1/" <<< ${line})_$(apg -n 1 -m 2 -M NCL | cut -c 1-2)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${user}\2/" <<< ${line}
continue
;;
*RAIN_LOOP*|*office_password*|*mattermost_*|*sympa_*|*gitea_*)
pass="$(apg -n 1 -m 16 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
*vaultwarden_ADMIN_TOKEN*)
pass="$(apg -n 1 -m 32 -M NCL)"
sed "s/\(.*\)--clean_val--\(.*\)/\1${pass}\2/" <<< ${line}
continue
;;
esac
else
echo "${line}"
continue
fi
printKazError "${line}" >&2
done < "${TMPL_FILE}" > "${NEW_FILE}"
mv "${NEW_FILE}" "${TMPL_FILE}"
chmod a+x "${TMPL_FILE}"
. "${TMPL_FILE}"
"${KAZ_BIN_DIR}/updateDockerPassword.sh"
exit 0

58
bin/setOwner.sh Executable file

@ -0,0 +1,58 @@
#!/bin/bash
cd $(dirname $0)/..
KAZ=$(pwd)
owner=root
usage(){
echo "Usage: $0 [root|user]"
exit 1
}
case $# in
0)
;;
1)
owner=$1
;;
*)
usage
;;
esac
####################
# config
cd ${KAZ}
DIRS="config secret bin"
chown -hR ${owner}: ${DIRS}
find ${DIRS} -type f -exec chmod a-x {} \;
find ${DIRS} -type f -name \*.sh -exec chmod a+x {} \;
chmod -R a+X ${DIRS}
chmod -R go= ${DIRS}
chmod a+x bin/*.sh
chown -hR www-data: config/orgaTmpl/wiki-conf/
####################
# dockers
cd ${KAZ}/dockers
chown -h ${owner}: . * */.env */* */config/*
chmod a-x,a+r * */*
chmod a+X . * */*
chmod a+x */*.sh
chown -hR ${owner}: \
etherpad/etherpad-lite/ \
paheko/extensions paheko/paheko-* \
jirafeau/Jirafeau \
mattermost/app
chown -hR www-data: \
vigilo \
web/html
chmod -R a+rX web/html

13
bin/updateAllOrga.sh Executable file

@ -0,0 +1,13 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_COMP_DIR}"
for orga in *-orga
do
${orga}/orga-gen.sh
"${KAZ_ROOT}/bin/container.sh" stop "${orga}"
"${KAZ_ROOT}/bin/container.sh" start "${orga}"
done

121
bin/updateDockerPassword.sh Executable file

@ -0,0 +1,121 @@
#!/bin/bash
KAZ_ROOT=$(cd $(dirname $0)/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
# pour mise au point
# SIMU=echo
# Améliorations à prévoir
# - donner en paramètre les services concernés (pour limiter les modifications)
# - pour les DB si on déclare un nouveau login, alors les privilèges sont créés mais les anciens ne sont pas révoqués
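# Exemple de révocation manuelle d'un ancien login (esquisse ; base, login et conteneur hypothétiques) :
#   echo "revoke all privileges on <base>.* from '<ancienUser>'; drop user '<ancienUser>'; flush privileges;" | \
#     docker exec -i <containerDB> bash -c "mysql --user=root --password=<rootPass>"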
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
updateEnvDB(){
# $1 = prefix
# $2 = envName
# $3 = containerName of DB
rootPass="$1_MYSQL_ROOT_PASSWORD"
dbName="$1_MYSQL_DATABASE"
userName="$1_MYSQL_USER"
userPass="$1_MYSQL_PASSWORD"
${SIMU} sed -i \
-e "s/MYSQL_ROOT_PASSWORD=.*/MYSQL_ROOT_PASSWORD=${!rootPass}/g" \
-e "s/MYSQL_DATABASE=.*/MYSQL_DATABASE=${!dbName}/g" \
-e "s/MYSQL_USER=.*/MYSQL_USER=${!userName}/g" \
-e "s/MYSQL_PASSWORD=.*/MYSQL_PASSWORD=${!userPass}/g" \
"$2"
# seulement si pas de mdp pour root
# pb oeuf et poule (il faudrait les anciennes valeurs) :
# * si rootPass change, faire à la main
# * si dbName change, faire à la main
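# Exemple de changement "à la main" du mot de passe root (esquisse ; conteneur et valeurs hypothétiques) :
#   echo "alter user 'root'@'localhost' identified by '<nouveauRootPass>'; flush privileges;" | \
#     docker exec -i <containerDB> bash -c "mysql --user=root --password=<ancienRootPass>"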
checkDockerRunning "$3" "$3" || return
echo "change DB pass on docker $3"
echo "grant all privileges on ${!dbName}.* to '${!userName}' identified by '${!userPass}';" | \
docker exec -i $3 bash -c "mysql --user=root --password=${!rootPass}"
}
updateEnv(){
# $1 = prefix
# $2 = envName
for varName in $(grep "^[a-zA-Z_]*=" $2 | sed "s/^\([^=]*\)=.*/\1/g")
do
srcName="$1_${varName}"
srcVal=$(echo "${!srcName}" | sed -e "s/[&]/\\\&/g")
${SIMU} sed -i \
-e "s%^[ ]*${varName}=.*\$%${varName}=${srcVal}%" \
"$2"
done
}
framadateUpdate(){
[[ "${COMP_ENABLE}" =~ " framadate " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php" ]; then
return 0
fi
checkDockerRunning "${framadateServName}" "Framadate" &&
${SIMU} docker exec -ti "${framadateServName}" bash -c -i "htpasswd -bc /var/framadate/admin/.htpasswd ${framadate_HTTPD_USER} ${framadate_HTTPD_PASSWORD}"
${SIMU} sed -i \
-e "s/^#*const DB_USER[ ]*=.*$/const DB_USER= '${framadate_MYSQL_USER}';/g" \
-e "s/^#*const DB_PASSWORD[ ]*=.*$/const DB_PASSWORD= '${framadate_MYSQL_PASSWORD}';/g" \
"${DOCK_LIB}/volumes/framadate_dateConfig/_data/config.php"
}
jirafeauUpdate(){
[[ "${COMP_ENABLE}" =~ " jirafeau " ]] || return
if [ ! -f "${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php" ]; then
return 0
fi
SHA=$(echo -n "${jirafeau_HTTPD_PASSWORD}" | sha256sum | cut -d \ -f 1)
${SIMU} sed -i \
-e "s/'admin_password'[ ]*=>[ ]*'[^']*'/'admin_password' => '${SHA}'/g" \
"${DOCK_LIB}/volumes/jirafeau_fileConfig/_data/config.local.php"
}
####################
# main
updateEnvDB "etherpad" "${KAZ_KEY_DIR}/env-${etherpadDBName}" "${etherpadDBName}"
updateEnvDB "framadate" "${KAZ_KEY_DIR}/env-${framadateDBName}" "${framadateDBName}"
updateEnvDB "gitea" "${KAZ_KEY_DIR}/env-${gitDBName}" "${gitDBName}"
updateEnvDB "mattermost" "${KAZ_KEY_DIR}/env-${mattermostDBName}" "${mattermostDBName}"
updateEnvDB "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudDBName}" "${nextcloudDBName}"
updateEnvDB "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeDBName}" "${roundcubeDBName}"
updateEnvDB "sympa" "${KAZ_KEY_DIR}/env-${sympaDBName}" "${sympaDBName}"
updateEnvDB "vigilo" "${KAZ_KEY_DIR}/env-${vigiloDBName}" "${vigiloDBName}"
updateEnvDB "wp" "${KAZ_KEY_DIR}/env-${wordpressDBName}" "${wordpressDBName}"
updateEnvDB "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenDBName}" "${vaultwardenDBName}"
updateEnvDB "castopod" "${KAZ_KEY_DIR}/env-${castopodDBName}" "${castopodDBName}"
updateEnv "apikaz" "${KAZ_KEY_DIR}/env-${apikazServName}"
updateEnv "ethercalc" "${KAZ_KEY_DIR}/env-${ethercalcServName}"
updateEnv "etherpad" "${KAZ_KEY_DIR}/env-${etherpadServName}"
updateEnv "framadate" "${KAZ_KEY_DIR}/env-${framadateServName}"
updateEnv "gandi" "${KAZ_KEY_DIR}/env-gandi"
updateEnv "gitea" "${KAZ_KEY_DIR}/env-${gitServName}"
updateEnv "jirafeau" "${KAZ_KEY_DIR}/env-${jirafeauServName}"
updateEnv "mattermost" "${KAZ_KEY_DIR}/env-${mattermostServName}"
updateEnv "nextcloud" "${KAZ_KEY_DIR}/env-${nextcloudServName}"
updateEnv "office" "${KAZ_KEY_DIR}/env-${officeServName}"
updateEnv "roundcube" "${KAZ_KEY_DIR}/env-${roundcubeServName}"
updateEnv "vigilo" "${KAZ_KEY_DIR}/env-${vigiloServName}"
updateEnv "wp" "${KAZ_KEY_DIR}/env-${wordpressServName}"
updateEnv "ldap" "${KAZ_KEY_DIR}/env-${ldapServName}"
updateEnv "sympa" "${KAZ_KEY_DIR}/env-${sympaServName}"
updateEnv "mail" "${KAZ_KEY_DIR}/env-${smtpServName}"
updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonServName}"
updateEnv "mobilizon" "${KAZ_KEY_DIR}/env-${mobilizonDBName}"
updateEnv "vaultwarden" "${KAZ_KEY_DIR}/env-${vaultwardenServName}"
updateEnv "castopod" "${KAZ_KEY_DIR}/env-${castopodServName}"
updateEnv "ldap" "${KAZ_KEY_DIR}/env-${ldapUIName}"
framadateUpdate
jirafeauUpdate
exit 0

447
bin/updateGit.sh Executable file

@ -0,0 +1,447 @@
#!/bin/bash
# l'idée est de faire un rsync dans un répertoire provisoire et de téléverser les différences.
# initialisation :
# cd /MonRepDeTest
# mkdir -p kazdev kazprod
# rsync -rlptDEHAX --delete --info=progress2 root@kazdev:/kaz/ ./kazdev/
# rsync -rlptDEHAX --delete --info=progress2 root@kazprod:/kaz/ ./kazprod/
# exemple :
# cd /MonRepDeTest/kazdev/
# ./dockers/rdiff.sh /MonRepDeTest/kazprod/ root@kazprod
# cd /MonRepDeTest/kazprod/
# ./dockers/rdiff.sh /MonRepDeTest/kazdev/ root@kazdev
export KAZ_ROOT=$(cd "$(dirname $0)/.."; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
export REF_DIRS="bin config dockers secret.tmpl"
export SIMU=""
usage () {
echo "Usage: $0 [-n] [-h]"
echo " -h help"
echo " -n simulation"
exit 1
}
for ARG in $@
do
case "${ARG}" in
'-h' | '-help' )
usage
;;
'-n' )
shift
export SIMU="echo"
;;
esac
done
if [[ $# -ne 0 ]]; then
echo "Illegal number of parameters"
usage
fi
########################################
# check system
for prg in git ssh rsync kompare; do
if ! type "${prg}" > /dev/null; then
printKazError "$0 need ${prg}"
echo "please run \"apt-get install ${prg}\""
exit
fi
done
########################################
# config
declare -a SKIP_FILE
export SKIP_FILE=$(grep -v -e ^# -e ^$ "${KAZ_CONF_DIR}/skip-file.txt")
KAZ_CFG_UPDATED=""
KAZ_UPDATE_ENV="${KAZ_CONF_DIR}/updateGit.conf"
if [ -f "${KAZ_UPDATE_ENV}" ]; then
. "${KAZ_UPDATE_ENV}"
else
KAZ_SRC_TYPE="VAGRANT"
KAZ_VAGRANT_ROOT="~/kaz-vagrant"
KAZ_DEV_REMOTE="root@kazdev"
KAZ_DEV_ROOT="/kaz"
KAZ_PROD_REMOTE="root@kazprod"
KAZ_PROD_ROOT="/kaz"
KAZ_OTHER_REMOTE="192.168.1.1"
KAZ_OTHER_ROOT="~/git/kaz"
fi
while : ; do
read -p " Form which tested server ([Vagrant|DEV|PROD|OTHER]) you want updaye git KAZ? [${KAZ_SRC_TYPE}]: " rep
case "${rep}" in
"")
break
;;
[vV]*)
KAZ_SRC_TYPE="VAGRANT"
KAZ_CFG_UPDATED="true"
break
;;
[dD]*)
KAZ_SRC_TYPE="DEV"
KAZ_CFG_UPDATED="true"
break
;;
[pP]*)
KAZ_SRC_TYPE="PROD"
KAZ_CFG_UPDATED="true"
break
;;
[oO]*)
KAZ_SRC_TYPE="OTHER"
KAZ_CFG_UPDATED="true"
break
;;
* )
printKazError "\"${rep}\" not match with [Vagrant|DEV|PROD|OTHER]."
;;
esac
done
case "${KAZ_SRC_TYPE}" in
VAGRANT)
while : ; do
read -p " Give kaz-vagrant root? [${KAZ_VAGRANT_ROOT}]: " vagrantPath
if [ -z "${vagrantPath}" ]; then
vagrantPath="${KAZ_VAGRANT_ROOT}"
else
KAZ_CFG_UPDATED="true"
fi
if [ ! -d "${vagrantPath/#\~/${HOME}}" ]; then
printKazError "${vagrantPath} doesn't exist"
continue
fi
KAZ_VAGRANT_ROOT="${vagrantPath}"
KAZ_VAGRANT_PAHT="$(cd "${vagrantPath/#\~/${HOME}}" 2>/dev/null; pwd)"
(for sign in .git .vagrant; do
if [ ! -d "${KAZ_VAGRANT_PAHT}/${sign}" ]; then
printKazError "${KAZ_VAGRANT_PAHT} not contains ${sign}"
exit 1
fi
done
exit 0
) && break;
done
;;
DEV|PROD|OTHER)
case "${KAZ_SRC_TYPE}" in
DEV)
remoteUser="${KAZ_DEV_REMOTE}"
remotePath="${KAZ_DEV_ROOT}"
;;
PROD)
remoteUser="${KAZ_PROD_REMOTE}"
remotePath="${KAZ_PROD_ROOT}"
;;
OTHER)
remoteUser="${KAZ_OTHER_REMOTE}"
remotePath="${KAZ_OTHER_ROOT}"
;;
esac
while : ; do
read -p "Give remote access? [${remoteUser}]: " rep
case "${rep}" in
"" )
break
;;
* )
if [[ "${rep}" =~ ^([a-zA-Z0-9._%+-]+@)?[a-zA-Z0-9.-]+$ ]]; then
remoteUser="${rep}"
break
else
printKazError "${rep} not match with [user@server]"
fi
;;
esac
done
while : ; do
read -p "Give remote path? [${remotePath}]: " rep
case "${rep}" in
"" )
break
;;
* )
if [[ "${rep}" =~ ^~?[a-zA-Z0-9/._-]*/$ ]]; then
remotePath="${rep}"
break
else
printKazError "${rep} not match with [path]"
fi
;;
esac
done
case "${KAZ_SRC_TYPE}" in
DEV)
if [ "${remoteUser}" != "${KAZ_DEV_REMOTE}" ]; then
KAZ_DEV_REMOTE="${remoteUser}"; KAZ_CFG_UPDATED="true"
fi
if [ "${remotePath}" != "${KAZ_DEV_ROOT}" ]; then
KAZ_DEV_ROOT="${remotePath}"; KAZ_CFG_UPDATED="true"
fi
;;
PROD)
if [ "${remoteUser}" != "${KAZ_PROD_REMOTE}" ]; then
KAZ_PROD_REMOTE="${remoteUser}"; KAZ_CFG_UPDATED="true"
fi
if [ "${remotePath}" != "${KAZ_PROD_ROOT}" ]; then
KAZ_PROD_ROOT="${remotePath}"; KAZ_CFG_UPDATED="true"
fi
;;
OTHER)
if [ "${remoteUser}" != "${KAZ_OTHER_REMOTE}" ]; then
KAZ_OTHER_REMOTE="${remoteUser}"; KAZ_CFG_UPDATED="true"
fi
if [ "${remotePath}" != "${KAZ_OTHER_ROOT}" ]; then
KAZ_OTHER_ROOT="${remotePath}"; KAZ_CFG_UPDATED="true"
fi
;;
esac
;;
esac
if [ -n "${KAZ_CFG_UPDATED}" ]; then
printKazMsg "Update ${KAZ_UPDATE_ENV}"
cat > "${KAZ_UPDATE_ENV}" <<EOF
KAZ_SRC_TYPE="${KAZ_SRC_TYPE}"
KAZ_VAGRANT_ROOT="${KAZ_VAGRANT_ROOT}"
KAZ_DEV_REMOTE="${KAZ_DEV_REMOTE}"
KAZ_DEV_ROOT="${KAZ_DEV_ROOT}"
KAZ_PROD_REMOTE="${KAZ_PROD_REMOTE}"
KAZ_PROD_ROOT="${KAZ_PROD_ROOT}"
KAZ_OTHER_REMOTE="${KAZ_OTHER_REMOTE}"
KAZ_OTHER_ROOT="${KAZ_OTHER_ROOT}"
EOF
fi
########################################
# check git/kaz
cd "${KAZ_ROOT}"
CURRENT_BRANCH="$(git branch | grep "*")"
if [ "${CURRENT_BRANCH}" == "* develop" ]; then
printKazMsg "You are on ${CURRENT_BRANCH}."
else
printKazError "You supposed to be on * develop"
checkContinue
fi
if [ "$(git status | grep "git restore")" ]; then
echo "You didn't commit previous change."
checkContinue
fi
########################################
# download valide source from Vagrant, DEV, PROD or OTHER
export TESTED_DIR="${KAZ_ROOT}/tmp/kaz"
mkdir -p "${TESTED_DIR}"
printKazMsg "Download from ${KAZ_SRC_TYPE} to ${TESTED_DIR}"
checkContinue
case "${KAZ_SRC_TYPE}" in
VAGRANT)
(
echo "check vagrant status (must be launch with vagrant)"
cd "${KAZ_VAGRANT_ROOT/#\~/${HOME}}"
while :; do
if [ -n "$(vagrant status | grep running)" ]; then
exit
fi
printKazMsg "Please start vagrant"
checkContinue
done
)
printKazMsg check key
while grep -q "@@@@@@@@@@" <<<"$(ssh -p 2222 -i ${KAZ_VAGRANT_ROOT/#\~/${HOME}}/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 date 2>&1 >/dev/null)"; do
printKazError "ssh key has changed"
echo "you must call :"
echo "${YELLOW} ssh-keygen -f ~/.ssh/known_hosts -R \"[127.0.0.1]:2222\"${NC}"
checkContinue
done
# XXX remote root
${SIMU} rsync -rlptDEHAX --no-o --delete --info=progress2 \
-e "ssh -p 2222 -i ${KAZ_VAGRANT_ROOT/#\~/${HOME}}/.vagrant/machines/default/virtualbox/private_key" \
$(for i in ${REF_DIRS} git download ; do echo "vagrant@127.0.0.1:/kaz/$i" ; done) \
"${TESTED_DIR}"
;;
DEV|PROD|OTHER)
# remoteUser is already set
case "${KAZ_SRC_TYPE}" in
DEV)
remoteUser="${KAZ_DEV_REMOTE}"; remotePath="${KAZ_DEV_ROOT}"
;;
PROD)
remoteUser="${KAZ_PROD_REMOTE}"; remotePath="${KAZ_PROD_ROOT}"
;;
OTHER)
remoteUser="${KAZ_OTHER_REMOTE}"; remotePath="${KAZ_OTHER_ROOT}"
;;
esac
${SIMU} rsync -rlptDEHAX --no-o --delete --info=progress2 \
$(for i in ${REF_DIRS} ; do echo "${remoteUser}:${remotePath}$i" ; done) \
"${TESTED_DIR}"
;;
esac
cd "${TESTED_DIR}"
badName(){
[[ -z "$1" ]] && return 0
for item in ${SKIP_FILE[@]}; do
[[ "$1/" =~ "${item}" ]] && return 0
done
return 1
}
declare -a CHANGED_DIRS
CHANGED_DIRS=$(find ${REF_DIRS} -type d ! -exec /bin/test -d "${KAZ_ROOT}/{}" \; -print -prune)
for file in ${CHANGED_DIRS[@]}; do
if badName "${file}" ; then
echo SKIP ${file}
continue
fi
printKazMsg "New dir ${file}"
while true; do
read -p "Synchronize ${GREEN}${file}/${NC} to ${GREEN}${KAZ_ROOT}/${file}/${NC}? [y/n/i/help]: " yn
case $yn in
[Yy]*)
${SIMU} rsync -rlptDEHAX --info=progress2 "${file}/" "${KAZ_ROOT}/${file}/"
(cd "${KAZ_ROOT}" ; git add "${file}/" )
break
;;
""|[Nn]*)
break
;;
[Ii]*)
# add to skip
echo "${file}" >> "${KAZ_CONF_DIR}/skip-file.txt"
break
;;
*)
echo -e \
" yes: add all subdir ${file} in git\n" \
" no: don't update git\n" \
" ignore: never ask this question\n" \
" help: print this help"
;;
esac
done
done
declare -a NEW_FILES
NEW_FILES=$(find ${REF_DIRS} '(' -type d ! -exec /bin/test -d "${KAZ_ROOT}/{}" \; -prune ')' -o '(' -type f ! -exec /bin/test -f "${KAZ_ROOT}/{}" \; -print ')')
for file in ${NEW_FILES[@]}; do
if badName "${file}" ; then
echo SKIP ${file}
continue
fi
echo "New file ${file}"
while true; do
read -p "Synchronize ${GREEN}${file}${NC} to ${GREEN}${KAZ_ROOT}/${file}${NC}? [y/n/i/help]: " yn
case $yn in
[Yy]*)
${SIMU} rsync -rlptDEHAX --info=progress2 "${file}" "${KAZ_ROOT}/${file}"
(cd "${KAZ_ROOT}" ; git add "${file}" )
break
;;
[Nn]*)
break
;;
[Ii]*)
# add to skip
echo "${file}" >> "${KAZ_CONF_DIR}/skip-file.txt"
break
;;
*)
echo -e \
" yes: add all subdir ${file} in git\n" \
" no: don't update git\n" \
" ignore: never ask this question\n" \
" help: print this help"
;;
esac
done
done
trap 'rm -f "${TMPFILE}"' EXIT
export TMPFILE="$(mktemp)" || exit 1
CHANGED_FILES=$(find ${REF_DIRS} '(' -type d ! -exec /bin/test -d "${KAZ_ROOT}/{}" \; -prune ')' -o '(' -type f -exec /bin/test -f "${KAZ_ROOT}/{}" \; ! -exec cmp -s "{}" "${KAZ_ROOT}/{}" \; -print ')')
for file in ${CHANGED_FILES[@]} ; do
if badName "${file}" ; then
echo SKIP ${file}
continue
fi
echo "TEST ${file}"
kompare "${file}" "${KAZ_ROOT}/${file}"
if [ "${KAZ_ROOT}/${file}" -ot "${TMPFILE}" ]; then
echo "No change of ${KAZ_ROOT}/${file}"
continue
fi
chmod --reference="${file}" "${KAZ_ROOT}/${file}"
done
echo
while : ; do
read -p "Do you want to keep ${TESTED_DIR} to speed up next download? [yes]" rep
case "${rep}" in
""|[yYoO]* )
break
;;
[Nn]* )
rm -r "${TESTED_DIR}"
;;
* )
echo "Please answer yes no."
;;
esac
done
cd "${KAZ_ROOT}"
echo -e "\nThe git will now commit in ${CURRENT_BRANCH}"
checkContinue
git commit -a
echo -e "\nThe git will now pull in ${CURRENT_BRANCH}"
checkContinue
git pull
printKazError "\nCheck if any confict"
#XXX check conflict
echo -e "\nThe git will now push in ${CURRENT_BRANCH}"
checkContinue
git push
printKazMsg "\nYou have to logged on ${KAZ_SRC_TYPE}, and launch:\n"
echo -e \
" ssh root@host\n" \
" cd /kaz\n" \
" git reset --hard\n" \
" git pull"
printKazMsg "\nIf you want to promote in master branch:\n" \
echo -e \
" git checkout master\n" \
" git pull\n" \
" git merge develop\n" \
"${RED}check if confict${NC}\n" \
" git commit -a \n" \
" git push\n"

36
bin/updateLook.sh Executable file

@ -0,0 +1,36 @@
#!/bin/bash
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd "${KAZ_BIN_DIR}/look"
THEMES=$(ls -F -- '.' | grep '/$' | sed 's%/%%' | tr '\n' '|' | sed 's%|$%%')
usage () {
echo "usage $0 {${THEMES}}"
exit
}
[ -z "$1" ] && usage
[ -d "$1" ] || usage
cd $1
docker cp kaz-tete.png jirafeauServ:/var/jirafeau/media/kaz/kaz.png
docker cp kazdate.png framadateServ:/var/framadate/images/logo-framadate.png
docker cp kazmel.png roundcubeServ:/var/www/html/skins/elastic/images/kazmel.png
docker cp kaz-tete.png sympaServ:/usr/share/sympa/static_content/icons/logo_sympa.png
docker cp kaz-tete.png dokuwikiServ:/dokuwiki/lib/tpl/docnavwiki/images/logo.png
docker cp kaz-tete.png ldapUI:/var/www/html/images/ltb-logo.png
docker cp kaz-entier.svg webServ:/usr/share/nginx/html/images/logo.svg
docker cp kaz-signature.png webServ:/usr/share/nginx/html/m/logo.png
for cloud in nextcloudServ kaz-nextcloudServ; do
docker cp kaz-entier.svg "${cloud}":/var/www/html/themes/kaz-entier.svg
docker cp kaz-tete.svg "${cloud}":/var/www/html/themes/kaz-tete.svg
docker exec -ti -u 33 "${cloud}" /var/www/html/occ theming:config logo /var/www/html/themes/kaz-tete.svg # tete
docker exec -ti -u 33 "${cloud}" /var/www/html/occ theming:config logoheader /var/www/html/themes/kaz-entier.svg # entier
# non #exec -ti -u 33 "${cloud}" /var/www/html/occ theming:config favicon /var/www/html/themes/kaz-patte.svg # patte
done

13
bin/upgradeDockerCompose.sh Executable file

@ -0,0 +1,13 @@
#!/bin/bash
OLDVERSION=$(docker-compose -v | sed -En 's/.*version ([a-z0-9\.]*).*/\1/p')
DOCKERCOMPOSE_VERSION="v2.17.3"
if [ "$OLDVERSION" = "$DOCKERCOMPOSE_VERSION" ]
then
echo -e "Docker Compose déjà en version $DOCKERCOMPOSE_VERSION"
exit
fi
curl -SL https://github.com/docker/compose/releases/download/$DOCKERCOMPOSE_VERSION/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

72
bin/verifExistenceMails.sh Executable file

@ -0,0 +1,72 @@
#!/bin/bash
#Koi: on vérifie que chaque email possède son répertoire et vice et versa (on supprime sinon)
#Kan: 20/06/2022
#Ki: fab
#on récupère toutes les variables et mdp
# on prend comme source des repertoire le dossier du dessus ( /kaz dans notre cas )
KAZ_ROOT=$(cd "$(dirname $0)"/..; pwd)
. "${KAZ_ROOT}/bin/.commonFunctions.sh"
setKazVars
cd $(dirname $0)/..
. "${DOCKERS_ENV}"
. "${KAZ_KEY_DIR}/SetAllPass.sh"
DOCK_DIR=$KAZ_COMP_DIR
SETUP_MAIL="docker exec -ti mailServ setup"
#on détermine le script appelant, le fichier log et le fichier source, tous issus de la même racine
PRG=$(basename $0)
RACINE=${PRG%.sh}
# emails et les alias KAZ déjà créés
TFILE_EMAIL=$(mktemp /tmp/test_email.XXXXXXXXX.TFILE_EMAIL)
#on stocke les emails et alias déjà créés
(
${SETUP_MAIL} email list | cut -d ' ' -f 2 | grep @
${SETUP_MAIL} alias list | cut -d ' ' -f 2 | grep @
) > ${TFILE_EMAIL}
# strip trailing ^M characters so they do not break the greps
sed -i -e 's/\r//g' ${TFILE_EMAIL}
rep_email="/var/lib/docker/volumes/postfix_mailData/_data"
# step 1: for each directory, check that the email exists
echo "Start of step 1: list the email directories and check that the corresponding emails exist"
ls -Ifilter -Itmp ${rep_email} | while read fin_email; do
ls ${rep_email}/${fin_email} | while read debut_email; do
        email="${debut_email}@${fin_email}"
        # does the email exist?
        if ! grep -q "^${email}$" "${TFILE_EMAIL}"; then
            # no: print the command that would delete the orphan directory
            echo rm "${rep_email}/${fin_email}/${debut_email}" -rf
        fi
done
    # if the domain directory is empty, delete it
find ${rep_email}/${fin_email} -maxdepth 0 -type d -empty -delete
done
echo "aucune commande n'a été lancée, possible de le faire à la main"
echo "Fin Etape n°1"
#Etape n°2: pour chaque email, on vérifie que le répertoire existe
echo "Début Etape n°2 n°2: on liste les emails et on vérifie que les répertoires correspondant existent"
cat ${TFILE_EMAIL} | while read email; do
    debut_email=$(echo "${email}" | awk -F '@' '{print $1}')
    fin_email=$(echo "${email}" | awk -F '@' '{print $2}')
    if [ ! -d "${rep_email}/${fin_email}/${debut_email}" ]; then
        echo "Warning: the directory ${fin_email}/${debut_email} does not exist although the email does!"
    fi
done
echo "Fin Etape n°2"

11
bin/vide_poubelle Executable file

@ -0,0 +1,11 @@
#!/bin/sh
cd "${HOME}/tmp/POUBELLE" >/dev/null 2>&1
if test "$?" -eq 0
then
rm -f * .* 2>/dev/null
else
echo "$0 pas de poubelle a vider !"
fi
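An alternative sketch (a suggestion only, not what the script does) is to let `find` do the work, which avoids the `.` and `..` matches produced by the `.*` glob:

#!/bin/sh
TRASH="${HOME}/tmp/POUBELLE"
if [ -d "${TRASH}" ]; then
    # remove everything below the trash directory, hidden files included
    find "${TRASH}" -mindepth 1 -delete
else
    echo "$0: no trash to empty!"
fi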


@ -0,0 +1,4 @@
# e-mail server composer
ldap
postfix
sympa


@ -0,0 +1 @@
# orga composer


@ -0,0 +1,2 @@
proxy
#traefik


@ -0,0 +1,12 @@
cloud
dokuwiki
#framadate
paheko
gitea
jirafeau
mattermost
roundcube
mobilizon
vaultwarden
ldap
apikaz


@ -0,0 +1,6 @@
jirafeau
ethercalc
collabora
etherpad
web
imapsync

153
config/dockers.tmpl.env Normal file

@ -0,0 +1,153 @@
# Environment variables used by the
# dockers through the symbolic link:
# .env -> ../../config/dockers.env
# (see the symlink sketch at the end of this diff)
#######################################
# prod / dev / local
mode=
########################################
# choice of domain
# prod=kaz.bzh
domain=
########################################
# choice of the sympa mail domain
# prod=listes.kaz.bzh
domain_sympa=
########################################
# For paheko, which hard-codes in its
# config the URL used to reach it
# prod=https
httpProto=
# prod=89.234.186.111
MAIN_IP=
# prod=89.234.186.151
SYMPA_IP=
# prod1=prod1
site=prod1
########################################
# choice of the ldap domain
# prod dc=kaz,dc=bzh
ldap_root=
########################################
# should be in env-jirafeauServ,
# but only variables from ".env" can be
# used for volume mounts
jirafeauDir=
# same: should be in env-castopodServ, but it is used directly in docker-compose.yml
castopodRedisPassword=
########################################
# politique de redémarrage
# prod=always
restartPolicy=
########################################
# multiple sites
# prod=prod1
site=
########################################
# ACME API URL for the certificates
# prod=https://acme-v02.api.letsencrypt.org/directory
acme_server=
########################################
# service names
# or www (but not ideal)
webHost=
calcHost=tableur
cloudHost=cloud
dateHost=sondage
dokuwikiHost=wiki
fileHost=depot
pahekoHost=paheko
gitHost=git
gravHost=grav
matterHost=agora
officeHost=office
padHost=pad
smtpHost=smtp
ldapHost=ldap
ldapUIHost=mdp
sympaHost=listes
vigiloHost=vigilo
webmailHost=webmail
wordpressHost=wp
mobilizonHost=mobilizon
vaultwardenHost=koffre
traefikHost=dashboard
imapsyncHost=imapsync
castopodHost=pod
apikazHost=apikaz
########################################
# internal ports
matterPort=8000
imapsyncPort=8080
apikaz=5000
########################################
# container names
dokuwikiServName=dokuwikiServ
ethercalcServName=ethercalcServ
etherpadServName=etherpadServ
framadateServName=framadateServ
pahekoServName=pahekoServ
gitServName=gitServ
gravServName=gravServ
jirafeauServName=jirafeauServ
mattermostServName=mattermostServ
nextcloudServName=nextcloudServ
officeServName=officeServ
proxyServName=proxyServ
roundcubeServName=roundcubeServ
smtpServName=mailServ
ldapServName=ldapServ
sympaServName=sympaServ
vigiloServName=vigiloServ
webServName=webServ
wordpressServName=wpServ
mobilizonServName=mobilizonServ
vaultwardenServName=vaultwardenServ
traefikServName=traefikServ
prometheusServName=prometheusServ
grafanaServName=grafanaServ
ethercalcDBName=ethercalcDB
etherpadDBName=etherpadDB
framadateDBName=framadateDB
gitDBName=gitDB
mattermostDBName=mattermostDB
nextcloudDBName=nextcloudDB
roundcubeDBName=roundcubeDB
sympaDBName=sympaDB
vigiloDBName=vigiloDB
wordpressDBName=wpDB
mobilizonDBName=mobilizonDB
vaultwardenDBName=vaultwardenDB
ldapUIName=ldapUI
imapsyncServName=imapsyncServ
castopodDBName=castopodDB
castopodServName=castopodServ
apikazServName=apikazServ
########################################
# services enabled by container.sh
# environment variables used
# for the proxy template

Some files were not shown because too many files have changed in this diff.
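As mentioned in the header of `config/dockers.tmpl.env`, each compose directory reads these variables through a `.env` symlink pointing to `../../config/dockers.env`. Presumably the template is first copied and filled in; a minimal sketch of that wiring, under this assumption (the compose directory path is a placeholder):

cp config/dockers.tmpl.env config/dockers.env   # then fill in the empty values
cd dockers/some-compose-dir                     # hypothetical path, two levels below the repo root
ln -s ../../config/dockers.env .env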