Commit 06dce56c authored by Michael Davis

Updates docs with how to set up Ceph ObjectStore

@@ -165,9 +165,12 @@ Install the CTA RPMs:
\section{Configure the ObjectStore}
\label{install_cta_configure_objectstore}
There are two types of ObjectStore: a Virtual File System (VFS) or the \href{http://docs.ceph.com/docs/master/}{Ceph}
distributed storage system.
\subsection{Configuration for a test system}
In a test system, configure the ObjectStore as a VFS. Initialise the new ObjectStore VFS and set global \texttt{rwx}
permissions:
\begin{lstlisting}
# export OBJECTSTORETYPE=file
# export OBJECTSTOREURL=$(cta-objectstore-initialize | sed 's/^.*file:\/\///')
@@ -182,39 +185,44 @@ drwxrwxrwx. 2 root root 240 May 9 11:22 /tmp/jobStoreVFSg4Mqz4/
%\end{lstlisting}
\subsection{Configuration for a production system}
\begin{alertbox}
\begin{itemize}
\item Ceph: how much space needs to be allocated for the ObjectStore?
\end{itemize}
\end{alertbox}
In a production system, configure the ObjectStore to use Ceph. To install the Ceph client tools:
\begin{lstlisting}
# yum-config-manager --enable ceph
# yum -y install ceph-common
\end{lstlisting}
Specify the location of the Ceph cluster monitor daemon in \texttt{/etc/ceph/ceph.conf}. To use the CERN Ceph cluster
monitor:
\begin{lstlisting}
[global]
mon host = cephmond.cern.ch:6790
\end{lstlisting}
Specify the location of the new ObjectStore:
\begin{lstlisting}
export OBJECTSTORETYPE=ceph
export OBJECTSTOREID=cta-id
export OBJECTSTOREPOOL=cta-tapepool
export OBJECTSTORENAMESPACE=cta-ns
export OBJECTSTOREURL=rados://${OBJECTSTOREID}@${OBJECTSTOREPOOL}:${OBJECTSTORENAMESPACE}
\end{lstlisting}
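With the values above, the ObjectStore URL expands to:
\begin{lstlisting}
# echo $OBJECTSTOREURL
rados://cta-id@cta-tapepool:cta-ns
\end{lstlisting}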
Create the Ceph keyring for this CTA instance (\texttt{/etc/ceph/ceph.client.cta-id.keyring}):
\begin{lstlisting}
[client.cta-id]
key = CEPH_OBJECTSTORE_SECRET_KEY
caps mon = "allow r"
caps osd = "allow rwx pool=cta-tapepool namespace=cta-ns"
\end{lstlisting}
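At this point it can be useful to verify that the keyring and capabilities work. A minimal check, assuming the
\texttt{rados} command-line tool from \texttt{ceph-common} and the environment variables defined above (the listing is
empty for a freshly-created namespace):
\begin{lstlisting}
# rados --id ${OBJECTSTOREID} --pool ${OBJECTSTOREPOOL} --namespace ${OBJECTSTORENAMESPACE} ls
\end{lstlisting}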
\begin{alertbox}
There is also a configuration file, \texttt{/etc/ceph/rbdmap}, but it appears to contain only comments:
\begin{lstlisting}
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
\end{lstlisting}
\end{alertbox}
\noindent Initialise the ObjectStore:
\begin{lstlisting}
# cta-objectstore-initialize $OBJECTSTOREURL
\end{lstlisting}
List the ObjectStore content:
\begin{lstlisting}
@@ -228,67 +236,6 @@ Delete the ObjectStore content:
$OBJECTSTORENAMESPACE rm toto
\end{lstlisting}
The following script (by Julien and Eric) automates the ObjectStore configuration: it reads the ObjectStore parameters
from a configuration directory and writes the corresponding environment variables to \texttt{/tmp/objectstore-rc.sh}:
\begin{lstlisting}
#!/bin/bash
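# Configure the CTA ObjectStore (Ceph or VFS) from the key files found in
# /etc/config/objectstore (e.g. a mounted Kubernetes ConfigMap) and write the
# resulting OBJECTSTORE* environment variables to /tmp/objectstore-rc.sh.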
OBJECTSTORE_CONFIG_DIR=/etc/config/objectstore
function get_conf {
test -r ${OBJECTSTORE_CONFIG_DIR}/$1 && cat ${OBJECTSTORE_CONFIG_DIR}/$1 || echo -n UNDEF
}
OBJECTSTORETYPE=UNDEF
OBJECTSTOREURL=UNDEF
rm -f /tmp/objectstore-rc.sh
case "$(get_conf objectstore.type)" in
"UNDEF")
echo "objectstore configmap is not defined"
ls ${OBJECTSTORE_CONFIG_DIR}
exit 1
;;
"ceph")
echo "Configuring ceph objectstore"
cat <<EOF >/etc/ceph/ceph.conf
[global]
mon host = $(get_conf objectstore.ceph.mon):$(get_conf objectstore.ceph.monport)
EOF
cat <<EOF >/etc/ceph/ceph.client.$(get_conf objectstore.ceph.id).keyring
[client.$(get_conf objectstore.ceph.id)]
key = $(get_conf objectstore.ceph.key)
caps mon = "allow r"
caps osd = "allow rwx pool=$(get_conf objectstore.ceph.pool) namespace=$(get_conf objectstore.ceph.namespace)"
EOF
OBJECTSTORETYPE=ceph
OBJECTSTOREURL="rados://$(get_conf objectstore.ceph.id)@$(get_conf objectstore.ceph.pool):$(get_conf objectstore.ceph.namespace)"
echo "export OBJECTSTORENAMESPACE=$(get_conf objectstore.ceph.namespace)" >> /tmp/objectstore-rc.sh
echo "export OBJECTSTOREID=$(get_conf objectstore.ceph.id)" >> /tmp/objectstore-rc.sh
echo "export OBJECTSTOREPOOL=$(get_conf objectstore.ceph.pool)" >> /tmp/objectstore-rc.sh
;;
"file")
echo "Configuring file objectstore"
OBJECTSTORETYPE=file
OBJECTSTOREURL=$(echo $(get_conf objectstore.file.path) | sed -e "s#%NAMESPACE#${MY_NAMESPACE}#")
;;
*)
echo "Error unknown objectstore type: $(get_conf objectstore.type)"
exit 1
;;
esac
cat <<EOF >>/tmp/objectstore-rc.sh
export OBJECTSTORETYPE=${OBJECTSTORETYPE}
export OBJECTSTOREURL=${OBJECTSTOREURL}
EOF
exit 0
\end{lstlisting}
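A minimal usage sketch, assuming the script is saved as \texttt{objectstore-setup.sh} (a hypothetical name) and that
the parameters are provided as one file per key under \texttt{/etc/config/objectstore}, matching the
\texttt{get\_conf} calls above (the key value is the placeholder from the keyring example):
\begin{lstlisting}
# mkdir -p /etc/config/objectstore
# echo -n ceph > /etc/config/objectstore/objectstore.type
# echo -n cephmond.cern.ch > /etc/config/objectstore/objectstore.ceph.mon
# echo -n 6790 > /etc/config/objectstore/objectstore.ceph.monport
# echo -n cta-id > /etc/config/objectstore/objectstore.ceph.id
# echo -n CEPH_OBJECTSTORE_SECRET_KEY > /etc/config/objectstore/objectstore.ceph.key
# echo -n cta-tapepool > /etc/config/objectstore/objectstore.ceph.pool
# echo -n cta-ns > /etc/config/objectstore/objectstore.ceph.namespace
# sh objectstore-setup.sh && source /tmp/objectstore-rc.sh
\end{lstlisting}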
\section{Configure the Catalogue}
\label{install_cta_configure_catalogue}
@@ -342,26 +289,54 @@ Create the CTA user:
\end{lstlisting}
\subsection{Configure CTA Front End Authentication}
CTA will receive archive and retrieve requests from multiple EOS instances, one per user (ATLAS, CMS, etc.). There
will be a Simple Shared Secret (SSS) key for each EOS instance.\footnote{In principle, each instance can have a unique
key. In practice, it may be that all instances use the same key.} The EOS instance name is used as the user name for
the SSS key. In the case of a local EOS instance (see Appendix~\ref{install_eos}), the instance name can be found in
the \texttt{eos\_env} configuration file:
\begin{lstlisting}
# grep EOS_INSTANCE_NAME /etc/sysconfig/eos_env
EOS_INSTANCE_NAME=eosdev
\end{lstlisting}
Use the instance name to create an SSS key for communication between CTA and the EOS instance, and add it to the
keytab for the CTA Front End:
\begin{lstlisting}
# cd /etc
# xrdsssadmin -k cta_eosdev -u eosdev -g cta add ctafrontend_server_sss.keytab
xrdsssadmin: Keyfile 'ctafrontend_server_sss.keytab' does not exist. Create it? (y | n): y
xrdsssadmin: 1 key out of 1 kept (0 expired).
# chown cta ctafrontend_server_sss.keytab
# chmod 600 ctafrontend_server_sss.keytab
\end{lstlisting}
\begin{alertbox}
The \texttt{-k} option specifies the key name that the \texttt{-u} (user) and \texttt{-g} (group) options will be
applied to, overriding the default \texttt{nobody/nogroup}.
If the keyname ends with \texttt{+}, SSS tokens may be forwarded when encrypted by the associated key, i.e. the key can
be used by a different host from the one that encrypted the SSS token. This is required in certain situations, for
example when tunnelling through a NAT device and---notably---when creating keys for use in the Kubernetes environment.
In production, the \texttt{+} should be omitted, as forwardable keys are inherently less secure. Allowing forwarded
tokens makes it impossible to detect man-in-the-middle attacks or stolen SSS tokens.
\end{alertbox}
The SSS key that we have just created also needs to be available to the EOS client, so we can copy the server keytab
into the client keytab:
\begin{lstlisting}
# cp ctafrontend_server_sss.keytab ctafrontend_client_sss.keytab
# chmod 600 ctafrontend_client_sss.keytab
# chown cta ctafrontend_client_sss.keytab
\end{lstlisting}
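To check which keys are present in a keytab (and in which order they were added), the keytab can be listed. A hedged
example, assuming the \texttt{list} action of \texttt{xrdsssadmin}:
\begin{lstlisting}
# xrdsssadmin list /etc/ctafrontend_client_sss.keytab
\end{lstlisting}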
\begin{alertbox}
In principle there can be one key per EOS instance, so it is possible that \texttt{ctafrontend\_client\_sss.keytab}
contains more than one key\footnote{In practice, it may be that all EOS instances share the same key.}. However, the
XRoot client always uses the last key added to the keytab, so the entire keytab can safely be copied to the client, so
long as the correct key for that instance is the last one added.
\end{alertbox}
The scripted equivalent of the above, as used in the Kubernetes environment (note that the key must be forwardable
there, hence the \texttt{+} suffix on the key name):
\begin{lstlisting}
# EOS INSTANCE NAME used as username for SSS key
EOSINSTANCE=ctaeos
# Create SSS key for ctafrontend, must be forwardable in kubernetes realm
echo y | xrdsssadmin -k ctafrontend+ -u ${EOSINSTANCE} -g cta add /etc/ctafrontend_SSS_s.keytab
# Copy the key to the client keytab file (which contains only this one SSS key)
cp /etc/ctafrontend_SSS_s.keytab /etc/ctafrontend_SSS_c.keytab
chmod 600 /etc/ctafrontend_SSS_s.keytab /etc/ctafrontend_SSS_c.keytab
chown cta /etc/ctafrontend_SSS_s.keytab /etc/ctafrontend_SSS_c.keytab
sed -i 's|.*sec.protocol sss.*|sec.protocol sss -s /etc/ctafrontend_SSS_s.keytab -c /etc/ctafrontend_SSS_c.keytab|' /etc/xrootd/xrootd-cta.cfg
sed -i 's|.*sec.protocol unix.*|#sec.protocol unix|' /etc/xrootd/xrootd-cta.cfg
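# Note (added for illustration): after the two sed commands above, the security
# protocol lines in /etc/xrootd/xrootd-cta.cfg should read:
#   sec.protocol sss -s /etc/ctafrontend_SSS_s.keytab -c /etc/ctafrontend_SSS_c.keytab
#   #sec.protocol unix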
@@ -377,6 +352,17 @@ for ((;;)); do test -e /etc/cta-frontend.keytab && break; sleep 1; echo -n .; do
echo OK
\end{lstlisting}
\begin{alertbox}
Then there is the voodoo dance with SSS, which should be explained.
\end{alertbox}
\begin{alertbox}
Unify/agree on the name of the XRootD configuration files: either call them \texttt{/etc/xrd.cf.*} or
\texttt{/etc/xrootd/<function>.cfg} (I prefer the \texttt{/etc/xrootd} structure, even if it differs from IT-DSS-FDO).
\end{alertbox}
\subsection{Start CTA Front End}
\begin{lstlisting}