(Updated March 4, 2020)
NOTE: This document supersedes all previous CAS documentation prior to January 2020.
We loosely follow the New School model of deploying Apereo CAS. Where appropriate we eliminate or modify the process to suit the needs of HVCC.
We implement a good portion of the New School design, along with its best practices for hardening, configuration, version control (git), persistent tickets (MongoDB), and high availability (Nginx with MongoDB replication).
Finally, in previous implementations, we relied on installing the products from Apache and Oracle. Instead this document strives to use Ubuntu packaging wherever possible. This reduces the overall complexity and maintenance of the installation and provides a support mechanism using Canonical support contracts.
Server and Tomcat Setup
We begin with a basic Ubuntu 18.04 server. Use the 18.04 template and then tailor it using the deployment documentation.
The server should be fully patched before proceeding.
Time
This system will have its time synchronized with NTP.
root@casdev-master:~# timedatectl
  Local time: Tue 2019-03-05 13:02:54 EST
  Universal time: Tue 2019-03-05 18:02:54 UTC
  RTC time: Tue 2019-03-05 18:02:54
  Time zone: America/New_York (EST, -0500)
  System clock synchronized: yes
  systemd-timesyncd.service active: yes
  RTC in local TZ: no
If the systemd-timesyncd.service is not set to active, then run the command below and confirm.
timedatectl set-ntp true
Entropy
A virtual machine simply does not generate enough entropy. On a headless machine (no monitor, no mouse, no keyboard), /dev/random (blocking) and /dev/urandom (non-blocking) are fed only by interrupts and disk activity, which is not enough when performing the many cryptographic operations where entropy is essential.
As a result the CAS server will use the haveged daemon to provide adequate entropy. Install it with the following:
apt install haveged
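To see the effect, you can check the kernel's available-entropy estimate before and after installing haveged (the path below is standard on Linux):

```shell
# Show the kernel's current entropy estimate, in bits. On a headless VM
# without haveged this is often only a few hundred; with haveged running
# it typically sits in the thousands (on kernels of this era).
cat /proc/sys/kernel/random/entropy_avail
```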
References:
Why you need entropy.
HAVEGE project.
haveged on GitHub.
haveged documentation.
Install Tomcat 9
apt install tomcat9
The OpenJDK 11 headless components are installed along with Tomcat.
Tomcat Hardening
Modify the following in /etc/tomcat9/server.xml to turn off automatic unpacking of the WAR files. Change:
[xml highlight="2"] <Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
[/xml]
to
[xml highlight="2"] <Host name="localhost" appBase="webapps"
unpackWARs="false" autoDeploy="true">
[/xml]
Configure TLS and create certificate(s)
[If this is a test system where you are only building a functionality test and have no intention of actually connecting authentication to a service, then follow the self-signed cert procedure in the OpenSSL documentation and update the APR connector accordingly (below).]
Generate a CSR and fetch a certificate from DigiCert. You will need these files for the APR based configuration.
Be sure to stage the intermediate cert with the rest of them.
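The key and CSR themselves can be generated with openssl in one step. The subject values below are placeholders, not HVCC's actual details; substitute your own organization information before submitting to DigiCert:

```shell
# Generate a 2048-bit private key and a CSR for the CAS host in one command.
# The -subj fields here are illustrative placeholders; adjust for your org.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout casdev-master_hvcc_edu.key \
    -out casdev-master_hvcc_edu.csr \
    -subj "/C=US/ST=New York/L=Troy/O=Example Org/CN=casdev-master.hvcc.edu"

# Keep the private key locked down.
chmod 600 casdev-master_hvcc_edu.key
```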
root@casdev-master:~/certs/casdev# ls -al
total 40
drwxr-xr-x 2 root root 4096 Sep 25 10:08 .
drwxr-xr-x 4 root root 4096 Sep 24 13:33 ..
-rw-r--r-- 1 root root 2558 Sep 24 13:29 casdev-master_hvcc_edu.crt
-rw-r--r-- 1 root root 1050 Sep 24 13:25 casdev-master_hvcc_edu.csr
-rw------- 1 root root 1704 Sep 24 13:25 casdev-master_hvcc_edu.key
-rw-r--r-- 1 root root 1689 Sep 24 13:29 DigiCertCA.crt
Copy all of the files, minus the CSR to /var/lib/tomcat9 and change the ownership:
root@casdev-master:~/certs/casdev# chown tomcat *
root@casdev-master:~/certs/casdev# cp -p casdev-master_hvcc_edu.crt /var/lib/tomcat9
root@casdev-master:~/certs/casdev# cp -p casdev-master_hvcc_edu.key /var/lib/tomcat9
root@casdev-master:~/certs/casdev# cp -p DigiCertCA.crt /var/lib/tomcat9
Modify the /etc/tomcat9/server.xml File
Change the contents of the server.xml file to disable the SHUTDOWN port:
Change
[xml] <Server port="8005" shutdown="SHUTDOWN">[/xml]
to
[xml] <Server port="-1" shutdown="SHUTDOWN">[/xml]
Comment out the HTTP port:
[xml highlight="8-12"] <!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
-->
<!--
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- A "Connector" using the shared thread pool -->
[/xml]
The most important portion of /etc/tomcat9/server.xml is the APR connector, which needs to be configured like so:
[xml] <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
maxThreads="150" SSLEnabled="true" >
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
<SSLHostConfig
honorCipherOrder="false" protocols="+TLSv1.3,+TLSv1.2"
ciphers="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256"
disableSessionTickets="true">
<Certificate certificateKeyFile="/var/lib/tomcat9/casdev.key"
certificateFile="/var/lib/tomcat9/casdev.crt"
certificateChainFile="/var/lib/tomcat9/DigiCertCA.crt"
type="RSA" />
</SSLHostConfig>
</Connector>
[/xml]
If there are any NIO connectors enabled, simply comment them out.
Finally, if you want the APR connector on port 443, you have to add the following to the bottom of /etc/default/tomcat9:
AUTHBIND=yes
Either way, make sure you modify /etc/default/tomcat9 and change:
JAVA_OPTS="-Djava.awt.headless=true -XX:+UseG1GC"
to the following for development and production
JAVA_OPTS="-Djava.awt.headless=true -Xms512M -Xmx2048M -XX:+UseParallelGC"
This sets the maximum heap to 2GB and uses the Parallel garbage collector, which generally provides better throughput than the default G1GC for this workload.
Tomcat Tuning
Configuring Async
Modify the file /etc/tomcat9/web.xml to add <async-supported>true</async-supported> to the default servlet:
[xml] <servlet>
<servlet-name>default</servlet-name>
<servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
<init-param>
<param-name>debug</param-name>
<param-value>0</param-value>
</init-param>
<init-param>
<param-name>listings</param-name>
<param-value>false</param-value>
</init-param>
<load-on-startup>1</load-on-startup>
<async-supported>true</async-supported>
</servlet>
[/xml]
A little further down in the file…
[xml highlight="13"] <servlet>
<servlet-name>jsp</servlet-name>
<servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
<init-param>
<param-name>fork</param-name>
<param-value>false</param-value>
</init-param>
<init-param>
<param-name>xpoweredBy</param-name>
<param-value>false</param-value>
</init-param>
<load-on-startup>3</load-on-startup>
<async-supported>true</async-supported>
</servlet>
[/xml]
Resource Caching
Modify the file /etc/tomcat9/context.xml to enable resource caching:
[xml] <Context>
<!-- Default set of monitored resources. If one of these changes, the -->
<!-- web application will be reloaded. -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>
<!-- Uncomment this to disable session persistence across Tomcat restarts -->
<!--
<Manager pathname="" />
-->
<!-- Enable caching, increase the cache size (10240KB default), increase -->
<!-- the TTL (5s default) -->
<Resources cachingAllowed="true" cacheMaxSize="40960" cacheTtl="60000" />
</Context>
[/xml]
Setup the CAS Development Area
Git Preparations
Build a work area. Use versions and other directory tags to separate parallel builds.
root@casdev-master:~# mkdir /opt/workspace/6.1
root@casdev-master:~# cd /opt/workspace/6.1
Clone the CAS Overlay Template:
root@casdev-master:/opt/workspace/6.1# git clone https://github.com/apereo/cas-overlay-template.git
Cloning into 'cas-overlay-template'...
remote: Enumerating objects: 17, done.
remote: Counting objects: 100% (17/17), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 1551 (delta 4), reused 9 (delta 3), pack-reused 1534
Receiving objects: 100% (1551/1551), 10.49 MiB | 43.32 MiB/s, done.
Resolving deltas: 100% (838/838), done.
Once cloned, we check out the 6.1 branch:
root@casdev-master:/opt/workspace/6.1# cd cas-overlay-template/
root@casdev-master:/opt/workspace/6.1/cas-overlay-template# git branch -a
* master
  remotes/origin/4.1
  remotes/origin/4.2
  remotes/origin/5.0.x
  remotes/origin/5.1
  remotes/origin/5.2
  remotes/origin/5.3
  remotes/origin/6.0
  remotes/origin/6.1
  remotes/origin/HEAD -> origin/master
  remotes/origin/alljarsinwar
  remotes/origin/master
root@casdev-master:/opt/workspace/6.1/cas-overlay-template# git checkout 6.1
Branch '6.1' set up to track remote branch '6.1' from 'origin'.
Switched to a new branch '6.1'
Confirm your CAS version in the gradle.properties file. You should not need to modify this file unless you are changing the CAS version within the same tree.
root@casdev-master:/opt/workspace/6.1/cas-overlay-template# grep cas.version gradle.properties
cas.version=6.1.5
Test the Build
[NOTE: The first time you run the Gradle wrapper script, it will download the Gradle distribution. You do not need to install anything else.]
root@casdev-master:/opt/workspace/6.1/cas-overlay-template# ./gradlew clean build
Starting a Gradle Daemon (subsequent builds will be faster)

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.6.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 13s
3 actionable tasks: 3 executed
Adding Dependencies
Edit the build.gradle file and add the first dependency by changing:
[xml] dependencies {
// Other CAS dependencies/modules may be listed here…
// compile "org.apereo.cas:cas-server-support-json-service-registry:${casServerVersion}"
}
[/xml]
to:
[xml] dependencies {
// Other CAS dependencies/modules may be listed here…
// compile "org.apereo.cas:cas-server-support-json-service-registry:${casServerVersion}"
compile "org.apereo.cas:cas-server-webapp:${project.'cas.version'}"
}
[/xml]
This tells Gradle that we wish to build a package for an external environment like Tomcat (which is what we are doing). You can then test the build to make sure all is well.
root@casdev-master:/opt/workspace/6.1/cas-overlay-template# ./gradlew clean build
Starting a Gradle Daemon, 1 busy Daemon could not be reused, use --status for details

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.6.3/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 23s
3 actionable tasks: 3 executed
CAS Configuration
The configuration is located in /etc/cas/config, as expected by CAS. The baseline for the config is located in the /opt/workspace/6.1/cas-overlay-template/etc/cas/config directory.
You may need to run the following on each server when the configs are replicated:
mkdir -p /etc/cas/config
The cas.properties File
The following is suitable for TESTING your CAS installation. The keys would need to be changed (see below) and likely many other configuration sections added for various features.
[bash] cas.server.name: https://casdev.hvcc.edu:8443
cas.server.prefix: https://casdev.hvcc.edu:8443/cas
logging.config: file:/etc/cas/config/log4j2.xml
cas.tgc.secure: true
cas.tgc.crypto.signing.key: WH6zsxQdf1N3DaWIH3O1-y728s8Wfk3p9EGlrZ_WiDZXB8hu2aEu4znbHzyYw0O97OMpVgHveYjWQ2OVErOqew
cas.tgc.crypto.encryption.key: giUmJMdZVeN-fcwZIeHHqpSjyHOhYsT79S8G-yjosKc
cas.webflow.crypto.signing.key: JnQmCJoz2iNZmEeiKxvl4g1z7GNxpG-OpCIFQpYA7hFLSMS6fw7ZgPWDlcOs3R-ejkz-22DIt4TOkYR_QAamWw
cas.webflow.crypto.encryption.key: o/peRCgQOP3ACvFgiTOmzw==
# This is used to disable the embedded Tomcat autoconfigure on versions <9.0.23
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.web.embedded.EmbeddedWebServerFactoryCustomizerAutoConfiguration
[/bash]
The keys in the config file are generated at https://mkjwk.org/ on the Shared Secret tab.
The signing keys are HS256 and 512 bits.
The TGC encryption key is HS256 and 256 bits.
Examples are shown below. Generate new keys when moving from development to production.
root@casdev-master:~# java -jar jwk-gen.jar -t oct -a HS256 -s 512
Full key:
{
  "kty": "oct",
  "kid": "1575570368",
  "k": "c1AQizlTvQndxf8UljmEwsZdXr8XwuO712OgvOWtk5Qu_jY49tgAePXzDgVVceyH3kz3_4jdAFhz9TJtw0LEYQ",
  "alg": "HS256"
}
root@casdev-master:~# java -jar jwk-gen.jar -t oct -a HS256 -s 256
Full key:
{
  "kty": "oct",
  "kid": "1575570419",
  "k": "etGN3lYHlYoQR-mbB_9enxAlL_OP-xF0mb3n8dAtU2k",
  "alg": "HS256"
}
The webflow crypto key is 16 octets and can be generated with:
root@casdev-master:~# openssl rand -base64 16
UdhGDjoHHndpsaaHnLUDCA==
root@casdev-master:~#
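As a sanity check, a generated key's "k" value can be base64url-decoded and its raw byte count confirmed (the key below is the 256-bit example from above, which should decode to exactly 32 bytes):

```shell
# Decode the base64url "k" value from jwk-gen and count the raw bytes.
# 32 bytes = 256 bits; the 512-bit signing keys should decode to 64 bytes.
k="etGN3lYHlYoQR-mbB_9enxAlL_OP-xF0mb3n8dAtU2k"
python3 -c "import base64, sys; k = sys.argv[1]; \
print(len(base64.urlsafe_b64decode(k + '=' * (-len(k) % 4))))" "$k"
# prints 32
```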
The log4j2.xml File
Modify the logging configuration from:
[xml] <Property name="baseDir" >/var/log</Property>[/xml]
to
[xml] <Property name="baseDir" >/var/log/tomcat9/cas</Property>[/xml]
This is subordinate to the tomcat9 dir due to additional sandboxing that has been applied to Tomcat for Debian/Ubuntu. Now you need to run:
mkdir /var/log/tomcat9/cas
chown tomcat:tomcat /var/log/tomcat9/cas
chmod 750 /var/log/tomcat9/cas
For easier log handling, consider changing the RollingFile policies from
[xml] <RollingFile name="file" fileName="${baseDir}/cas.log" append="true"
filePattern="${baseDir}/cas-%d{yyyy-MM-dd-HH}-%i.log">
<PatternLayout pattern="%d %p [%c] - <%m>%n"/>
<Policies>
<OnStartupTriggeringPolicy />
<SizeBasedTriggeringPolicy size="10 MB"/>
<TimeBasedTriggeringPolicy />
</Policies>
</RollingFile>
<RollingFile name="auditlogfile" fileName="${baseDir}/cas_audit.log" append="true"
filePattern="${baseDir}/cas_audit-%d{yyyy-MM-dd-HH}-%i.log">
<PatternLayout pattern="%d %p [%c] - %m%n"/>
<Policies>
<OnStartupTriggeringPolicy />
<SizeBasedTriggeringPolicy size="10 MB"/>
<TimeBasedTriggeringPolicy />
</Policies>
</RollingFile>
[/xml]
to
[xml highlight="2,5,9,12"] <RollingFile name="file" fileName="${baseDir}/cas.log" append="true"
filePattern="${baseDir}/cas-%d{yyyy-MM-dd}.log">
<PatternLayout pattern="%d %p [%c] - <%m>%n"/>
<Policies>
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
</RollingFile>
<RollingFile name="auditlogfile" fileName="${baseDir}/cas_audit.log" append="true"
filePattern="${baseDir}/cas_audit-%d{yyyy-MM-dd}.log">
<PatternLayout pattern="%d %p [%c] - %m%n"/>
<Policies>
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
</RollingFile>
[/xml]
Packaging and Installation
Below are a couple of scripts based on Dave Curry's work. For them to be effective, you need to perform a few steps prior to packaging and installation. The first is making sure your overlay build is up to date:
./gradlew clean build
Then you have to extract the WAR back into its original form so the packager can grab the components from the tree. This is necessary because we do not allow automatic unpacking of WAR files in Tomcat (a hardening measure).
./gradlew explodeWarOnly
Now you are ready to run the packager. The script for packaging is shown below.
[bash] #!/bin/sh
# Be sure to update this path to the location of your actual CAS Overlay
overlayDir=/path/to/nowhere
group=tomcat
[ -d "$overlayDir" ] || {
echo "$overlayDir does not exist."
exit 1
}
[ -d "$overlayDir/build/cas" ] || {
echo "WAR was not exploded."
exit 2
}
echo "Creating package..."
cd $overlayDir
tar czf /tmp/cassrv-files.tgz --owner=root --group=$group --mode=g-w,o-rwx \
etc/cas -C build cas --exclude cas/META-INF
echo ""
ls -asl /tmp/cassrv-files.tgz
exit 0
[/bash]
The next script handles the installation.
[bash] #!/bin/sh
# Be sure to update this path to the location of your actual Tomcat path
tomcatDir=/path/to/tomcat9
[ -d "$tomcatDir" ] || {
echo "$tomcatDir does not exist."
exit 1
}
[ -d "$tomcatDir/webapps" ] || {
echo "There is no place to put the CAS files."
exit 2
}
echo "--- Installing on `hostname`"
umask 027
if [ -f /tmp/cassrv-files.tgz ]
then
service tomcat9 stop
# Comment out these three lines if you do NOT want your configs replaced!
cd /
rm -rf etc/cas/config etc/cas/services
tar xzf /tmp/cassrv-files.tgz etc/cas
cd $tomcatDir
rm -rf webapps/cas work/Catalina/localhost/cas
cd $tomcatDir/webapps
tar xzf /tmp/cassrv-files.tgz cas
service tomcat9 start
rm -f /tmp/cassrv-files.tgz /tmp/cassrv-install.sh
echo "Installation complete."
else
echo "Cannot find /tmp/cassrv-files.tgz; nothing installed."
exit 3
fi
exit 0
[/bash]
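Since the install script removes /tmp/cassrv-install.sh when it finishes, the implication is that the tarball and the script are first pushed out to each CAS server. A hypothetical distribution wrapper for that step might look like the following (hostnames are examples; the leading echo makes it a dry run, so remove it to actually execute):

```shell
#!/bin/sh
# Hypothetical distribution wrapper (dry run): prints the scp/ssh commands
# that would push the package and installer to each CAS server and run the
# installer there. Remove the leading "echo"s to actually execute.
distribute() {
    for host in "$@"; do
        echo scp /tmp/cassrv-files.tgz /tmp/cassrv-install.sh "${host}:/tmp/"
        echo ssh "$host" "sh /tmp/cassrv-install.sh"
    done
}

distribute casdev-srv01.hvcc.edu casdev-srv02.hvcc.edu
```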
Adding More Features
Update the build.gradle file to add more dependencies:
[xml] dependencies {
// Other CAS dependencies/modules may be listed here…
// compile "org.apereo.cas:cas-server-support-json-service-registry:${casServerVersion}"
compile "org.apereo.cas:cas-server-webapp:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-ldap:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-saml:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-jdbc-drivers:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-pm-webflow:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-pm-jdbc:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-json-service-registry:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-reports:${project.'cas.version'}"
compile "org.apereo.cas:cas-server-support-duo:${project.'cas.version'}"
}
[/xml]
What each module provides, with sample cas.properties entries where applicable:

Provide LDAP support (org.apereo.cas:cas-server-support-ldap). Sample cas.properties:
[bash]
cas.authn.ldap[0].order=0
cas.authn.ldap[0].type=AUTHENTICATED
cas.authn.ldap[0].ldapUrl=ldaps://hvcc-dc02.hvcc.edu ldaps://hvcc-dc03.hvcc.edu
#cas.authn.ldap[0].principalAttributeId=sAMAccountName
cas.authn.ldap[0].passwordPolicy.type=AD
[/bash]

Provide SAML attribute release through the /validate URL (org.apereo.cas:cas-server-support-saml).

Provide JDBC support and drivers (org.apereo.cas:cas-server-support-jdbc-drivers).

Provide Webflow support for Password Management (org.apereo.cas:cas-server-support-pm-webflow).

Provide JDBC support for Password Management (org.apereo.cas:cas-server-support-pm-jdbc).

Provide JSON support for services (org.apereo.cas:cas-server-support-json-service-registry).

Provide a reporting web interface (org.apereo.cas:cas-server-support-reports).

Provide Duo MFA support (org.apereo.cas:cas-server-support-duo).
[EVERYTHING BELOW THIS LINE IS UNDER CONSTRUCTION AND VALIDATION!]
Additional Configuration Items
This section describes some configuration values not based on a particular installed feature.
Web Headers
[bash] ## Turn off the defaults since we are using NGINX to set the headers!
# This WILL break stuff!
#
cas.httpWebRequest.header.xframe=false
cas.httpWebRequest.header.xss=false
cas.httpWebRequest.header.hsts=false
cas.httpWebRequest.header.xcontent=false
cas.httpWebRequest.header.cache=false
[/bash]
Email Configuration
[bash] spring.mail.host=mail.hvcc.edu
spring.mail.port=25
spring.mail.testConnection=true
spring.mail.properties.mail.smtp.auth=false
[/bash]
TGT Expiration
[bash] # 8 hours - negative value = never expires
cas.ticket.tgt.maxTimeToLiveInSeconds=28800
# 8 hours (Set to a negative value to never expire tickets)
cas.ticket.tgt.timeToKillInSeconds=28800
[/bash]
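The 28800 figure above is simply 8 hours expressed in seconds:

```shell
# 8 hours x 60 minutes x 60 seconds, as used for the TGT lifetime above
echo $((8 * 60 * 60))
# prints 28800
```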
Theme Settings
[bash] cas.theme.paramName=theme
cas.theme.defaultThemeName=hvcctheme
[/bash]
THIS NEEDS TO BE COMPLETED!!
Configuring MongoDB Replication
Installation and Configuration
Install the MongoDB software on all the servers participating in the HA group.
apt install mongodb
By default, the log rotation setup and the permissions for the directories are correct.
Create an admin user from the MongoDB shell:
root@casdev-master:~# mongo
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
        http://docs.mongodb.org/
Questions? Try the support group
        http://groups.google.com/group/mongodb-user
Server has startup warnings:
2019-12-04T11:52:54.000-0500 I STORAGE  [initandlisten]
2019-12-04T11:52:54.000-0500 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-04T11:52:54.000-0500 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-12-04T11:52:55.850-0500 I CONTROL  [initandlisten]
2019-12-04T11:52:55.850-0500 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-12-04T11:52:55.850-0500 I CONTROL  [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-12-04T11:52:55.850-0500 I CONTROL  [initandlisten]
> use admin
switched to db admin
> db.createUser( { user: "mongoadmin", pwd: "MongoSh@r3dPassword", roles: [ { role: "root", db: "admin" } ] } )
Successfully added user: {
        "user" : "mongoadmin",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
> exit
bye
Generate a SCRAM-SHA1 Keyfile
This file is used for internal authentication between the replication servers.
openssl rand -base64 756 > mongod-auth.key
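MongoDB requires the keyfile content to be between 6 and 1024 base64 characters; 756 random bytes encode to exactly 1008 characters, which fits under the cap. Repeating the generation with a size check:

```shell
# Generate the keyfile, then count its base64 characters (newlines stripped).
# 756 raw bytes -> 756/3*4 = 1008 base64 characters, under MongoDB's
# 1024-character limit for keyfiles.
openssl rand -base64 756 > mongod-auth.key
chmod 400 mongod-auth.key
tr -d '\n' < mongod-auth.key | wc -c
# prints 1008
```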
You can copy the file to the servers with a small script like:
root@casdev-master:~# tar -cf /tmp/kf.tar --owner=mongodb --group=mongodb --mode=400 mongod-auth.key
root@casdev-master:~# for i in 01 02
> do
>   scp /tmp/kf.tar casdev-srv${i}:/tmp/kf.tar
>   ssh casdev-srv${i} "cd /var/lib/mongodb; tar -xf /tmp/kf.tar; rm /tmp/kf.tar"
> done
kf.tar                                   100%   10KB  11.9MB/s   00:00
kf.tar                                   100%   10KB   9.9MB/s   00:00
Modify The MongoDB Config
For some unknown reason, Ubuntu 18.04 ships MongoDB 3.6.x with a configuration file still in the old 2.4 format. As a result, any docs you read on configuring the resource pool will not match. So, perform the following:
service mongodb stop
mv /etc/mongodb.conf /etc/mongodb.conf.orig
Now create a new /etc/mongodb.conf with the following:
# This is the original path from the 2.4 style file.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

# We have kept this the same as the 2.4 style.
systemLog:
  destination: file
  path: /var/log/mongodb/mongodb.log
  logAppend: true

# Allowing connections from outside - was 127.0.0.1
net:
  bindIp: 0.0.0.0
  port: 27017

# This is the keyfile detail
security:
  keyFile: /var/lib/mongodb/mongod-auth.key

# Define the replication set
replication:
  replSetName: rs0
and start the database
service mongodb start
Create the Replica Set
Before you start the replication set configuration, make sure that the MongoDB config is actually set on all the participating servers.
root@casdev-master:~# mongo -u mongoadmin -p --authenticationDatabase admin
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Server has startup warnings:
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten]
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "casdev-master:27017",
        "ok" : 1,
        "operationTime" : Timestamp(1575576239, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575576239, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        }
}
> rs.add("casdev-srv01.hvcc.edu")
2019-12-05T15:04:25.091-0500 E QUERY    [thread1] Error: count failed: {
        "operationTime" : Timestamp(1575576261, 1),
        "ok" : 0,
        "errmsg" : "Cache Reader No keys found for HMAC that is valid for time: { ts: Timestamp(1575576239, 1) } with id: 0",
        "code" : 211,
        "codeName" : "KeyNotFound",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575576261, 1),
                "signature" : {
                        "hash" : BinData(0,"dJXGEHhmBfed6hR9Y3qRPkf3zao="),
                        "keyId" : NumberLong("6767048427449614337")
                }
        }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DBQuery.prototype.count@src/mongo/shell/query.js:383:11
DBCollection.prototype.count@src/mongo/shell/collection.js:1584:12
rs.add@src/mongo/shell/utils.js:1274:1
@(shell):1:1
>
bye
As you can see, the first add failed because the config was wrong. After fixing the config:
root@casdev-master:~# mongo -u mongoadmin -p --authenticationDatabase admin
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Server has startup warnings:
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten]
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
rs0:PRIMARY> rs.add("casdev-srv01.hvcc.edu")
{
        "ok" : 1,
        "operationTime" : Timestamp(1575576463, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575576463, 1),
                "signature" : {
                        "hash" : BinData(0,"JtfuWjjvSCztMWVPLndliCqblko="),
                        "keyId" : NumberLong("6767048427449614337")
                }
        }
}
rs0:PRIMARY> rs.add("casdev-srv02.hvcc.edu")
{
        "ok" : 1,
        "operationTime" : Timestamp(1575576682, 1),
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575576682, 1),
                "signature" : {
                        "hash" : BinData(0,"4qAcWlm+P89w2eU0lLqjRNLsKjQ="),
                        "keyId" : NumberLong("6767048427449614337")
                }
        }
}
rs0:PRIMARY>
Now here is a set of commands to check the replication set:
rs.conf()
rs.status()
rs.printSlaveReplicationInfo()
The casdev-master is the primary in this instance. The rs.printSlaveReplicationInfo() output can be useful to make sure that the servers are in sync:
rs0:PRIMARY> rs.printSlaveReplicationInfo()
source: casdev-srv01.hvcc.edu:27017
        syncedTo: Thu Dec 05 2019 15:14:01 GMT-0500 (EST)
        0 secs (0 hrs) behind the primary
source: casdev-srv02.hvcc.edu:27017
        syncedTo: Thu Dec 05 2019 15:14:01 GMT-0500 (EST)
        0 secs (0 hrs) behind the primary
rs0:PRIMARY>
Testing Replication
root@casdev-srv02:~# mongo -u mongoadmin -p --authenticationDatabase admin
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Server has startup warnings:
2019-12-05T15:11:17.722-0500 I STORAGE  [initandlisten]
2019-12-05T15:11:17.722-0500 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-05T15:11:17.722-0500 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
rs0:SECONDARY> show dbs
2019-12-05T16:03:08.998-0500 E QUERY    [thread1] Error: listDatabases failed:{
        "operationTime" : Timestamp(1575579781, 1),
        "ok" : 0,
        "errmsg" : "not master and slaveOk=false",
        "code" : 13435,
        "codeName" : "NotMasterNoSlaveOk",
        "$clusterTime" : {
                "clusterTime" : Timestamp(1575579781, 1),
                "signature" : {
                        "hash" : BinData(0,"YJi/F0jNGJJPjE3ZsEk7OewBNsw="),
                        "keyId" : NumberLong("6767048427449614337")
                }
        }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:65:1
shellHelper.show@src/mongo/shell/utils.js:816:19
shellHelper@src/mongo/shell/utils.js:706:15
@(shellhelp2):1:1
As a form of protection, you cannot perform many operations from a secondary. However, you can:
rs0:SECONDARY> db.getMongo().setSlaveOk()
rs0:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
rs0:SECONDARY> use admin
switched to db admin
rs0:SECONDARY> show collections
system.keys
system.users
system.version
rs0:SECONDARY> db.system.users.find()
{ "_id" : "admin.mongoadmin", "user" : "mongoadmin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "tkRL4F8xqdCee27KlWBcYA==", "storedKey" : "EtjEG/l1zcSmVs384zeGNf6rFCw=", "serverKey" : "5bYNaB7MdZh0WZ2muoiv1zdVopI=" } }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
rs0:SECONDARY>
Now, if replication were not working:
- You would not be able to log into the secondary using the admin database.
- You would not see the admin database that was created on the primary.
Wrapping Up Replication
Once the replication set is active, the preferred way to connect to the database is through the replication set:
mongo -u mongoadmin -p --authenticationDatabase admin --host rs0/casdev-master.hvcc.edu,casdev-srv01.hvcc.edu,casdev-srv02.hvcc.edu
This method will make sure that you connect to the primary regardless of which server that happens to be.
Now, we will create the database for CAS and a user account.
root@casdev-master:~# mongo -u mongoadmin -p --authenticationDatabase admin --host rs0/casdev-master.hvcc.edu,casdev-srv01.hvcc.edu,casdev-srv02.hvcc.edu
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017/?replicaSet=rs0
2019-12-05T16:51:52.561-0500 I NETWORK  [thread1] Starting new replica set monitor for rs0/casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017
2019-12-05T16:51:52.564-0500 I NETWORK  [thread1] Successfully connected to casdev-master.hvcc.edu:27017 (1 connections now open to casdev-master.hvcc.edu:27017 with a 5 second timeout)
2019-12-05T16:51:52.564-0500 I NETWORK  [thread1] changing hosts to rs0/casdev-master:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017 from rs0/casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017
2019-12-05T16:51:52.564-0500 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to casdev-srv02.hvcc.edu:27017 (1 connections now open to casdev-srv02.hvcc.edu:27017 with a 5 second timeout)
2019-12-05T16:51:52.565-0500 I NETWORK  [thread1] Successfully connected to casdev-master:27017 (1 connections now open to casdev-master:27017 with a 5 second timeout)
2019-12-05T16:51:52.566-0500 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to casdev-srv01.hvcc.edu:27017 (1 connections now open to casdev-srv01.hvcc.edu:27017 with a 5 second timeout)
MongoDB server version: 3.6.3
Server has startup warnings:
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten]
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-04T23:09:33.704-0500 I STORAGE  [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
rs0:PRIMARY>
rs0:PRIMARY>
rs0:PRIMARY> use casdb
switched to db casdb
rs0:PRIMARY> db.createUser( { user: "mongocas", pwd: "CAS@ccountP@55w0rd", roles: [ { role: "readWrite", db: "casdb" } ] } )
Successfully added user: {
        "user" : "mongocas",
        "roles" : [
                {
                        "role" : "readWrite",
                        "db" : "casdb"
                }
        ]
}
rs0:PRIMARY> exit
bye
root@casdev-master:~#
Now test the account.
root@casdev-master:~# mongo casdb -u mongocas -p --host rs0/casdev-master.hvcc.edu,casdev-srv01.hvcc.edu,casdev-srv02.hvcc.edu
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017/casdb?replicaSet=rs0
2019-12-05T17:10:28.358-0500 I NETWORK  [thread1] Starting new replica set monitor for rs0/casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017
2019-12-05T17:10:28.361-0500 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to casdev-master.hvcc.edu:27017 (1 connections now open to casdev-master.hvcc.edu:27017 with a 5 second timeout)
2019-12-05T17:10:28.361-0500 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] changing hosts to rs0/casdev-master:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017 from rs0/casdev-master.hvcc.edu:27017,casdev-srv01.hvcc.edu:27017,casdev-srv02.hvcc.edu:27017
2019-12-05T17:10:28.362-0500 I NETWORK  [thread1] Successfully connected to casdev-srv02.hvcc.edu:27017 (1 connections now open to casdev-srv02.hvcc.edu:27017 with a 5 second timeout)
2019-12-05T17:10:28.362-0500 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to casdev-master:27017 (1 connections now open to casdev-master:27017 with a 5 second timeout)
2019-12-05T17:10:28.364-0500 I NETWORK  [thread1] Successfully connected to casdev-srv01.hvcc.edu:27017 (1 connections now open to casdev-srv01.hvcc.edu:27017 with a 5 second timeout)
MongoDB server version: 3.6.3
rs0:PRIMARY> exit
bye
root@casdev-master:~#
Integrating Into CAS
Now we have to update the pom.xml file for the cas-overlay-template. Add the following to the end of the dependencies.
<dependency>
    <groupId>org.apereo.cas</groupId>
    <artifactId>cas-server-support-mongo-ticket-registry</artifactId>
    <version>${cas.version}</version>
</dependency>
Now rebuild the server (for the Maven overlay, typically by running mvn clean package).
The following is almost a direct quote from the New School docs. The extra properties break the MongoDB connection string into its components for easier editing.

#
# Components of the MongoDB connection string broken out for ease of editing.
# See https://docs.mongodb.com/reference/connection-string/
#
# Passwords that include a colon or @ must be URL encoded!
#
mongo.db: casdb
mongo.rs: rs0
mongo.opts: &ssl=false
mongo.creds: mongocas:CAS%40ccountP%4055w0rd
mongo.hosts: casdev-master.hvcc.edu,casdev-srv01.hvcc.edu,casdev-srv02.hvcc.edu

#
# The connection string, assembled
#
mongo.uri: mongodb://${mongo.creds}@${mongo.hosts}/${mongo.db}?replicaSet=${mongo.rs}${mongo.opts}

#
# Ticket registry
#
cas.ticket.registry.mongo.clientUri: ${mongo.uri}
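The URL-encoded value of mongo.creds above can be generated rather than hand-built. This is an illustrative Python sketch (not part of the deployment itself) using the account name, example password, and hostnames from this document:

```python
from urllib.parse import quote

# Reserved characters in the password (such as : and @) must be
# percent-encoded before being embedded in the connection string.
user = "mongocas"
password = "CAS@ccountP@55w0rd"

creds = f"{quote(user, safe='')}:{quote(password, safe='')}"
hosts = "casdev-master.hvcc.edu,casdev-srv01.hvcc.edu,casdev-srv02.hvcc.edu"

# Assemble the same URI that the cas.ticket.registry.mongo.clientUri
# property resolves to after ${...} interpolation.
uri = f"mongodb://{creds}@{hosts}/casdb?replicaSet=rs0&ssl=false"

print(creds)  # mongocas:CAS%40ccountP%4055w0rd
print(uri)
```

If the encoded credentials printed here do not match what you placed in mongo.creds, the CAS server will fail to authenticate to MongoDB.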
Nginx Configuration
On your HA server, install Nginx.
apt install nginx
Create a new /etc/nginx/sites-available/HVCC file with the following:
server {
    listen 80 default_server;
    return 301 https://$host/cas;
}

upstream casgroup {
    server casdev-master.hvcc.edu:8443;
    server casdev-srv01.hvcc.edu:8443;
    server casdev-srv02.hvcc.edu:8443;
}

server {
    # Yes we are NGINX, but they do not need to know which version.
    server_tokens off;
    gzip off;

    # Only listen on 443 and support http2.
    listen 443 ssl http2;
    server_name casdev.hvcc.edu;

    ssl_certificate /var/lib/nginx/casdev-bundle.crt;
    ssl_certificate_key /var/lib/nginx/casdev.key;
    ssl_certificate /var/lib/nginx/casdev_hvcc_edu-secp384r1-bundle.crt;
    ssl_certificate_key /var/lib/nginx/casdev_hvcc_edu-secp384r1.key;

    # Security stuff...
    ssl on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.3 TLSv1.2;
    #ssl_protocols TLSv1.2;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
    #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    #ssl_ciphers TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES256-GCM-SHA384:!AES128;
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve secp384r1;   # Requires nginx >= 1.1.0
    ssl_dhparam /etc/nginx/dhparam-4096.pem;
    ssl_stapling on;            # Requires nginx >= 1.3.7
    ssl_stapling_verify on;     # Requires nginx >= 1.3.7
    resolver 151.103.16.146 151.103.16.12 valid=300s;
    resolver_timeout 5s;

    # Headers for better security overall!
    # Note these are response headers, so they must be set with add_header;
    # proxy_set_header would send them to the backend instead of the browser.
    add_header Strict-Transport-Security "max-age=31536000 ; includeSubDomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    location / {
        proxy_cache_revalidate on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://casgroup;
        proxy_read_timeout 90;
    }

    error_page 404 /custom_404.html;
    location = /custom_404.html {
        root /usr/share/nginx/html;
        internal;
    }

    error_page 500 502 503 504 /custom_50x.html;
    location = /custom_50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}
You will most likely have to make changes to the casgroup upstream and the server name, especially when building the production version. This configuration is subject to change, so you may need to revisit this location for updated content.
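By default, Nginx distributes requests across the servers in an upstream block using weighted round-robin; with the equal implicit weights above, requests rotate evenly through the three CAS nodes. A toy Python model of that rotation (illustrative only, using the hostnames from this config):

```python
from itertools import cycle

# nginx's default upstream algorithm is weighted round-robin.
# With equal weights it conceptually rotates through the backends.
backends = cycle([
    "casdev-master.hvcc.edu:8443",
    "casdev-srv01.hvcc.edu:8443",
    "casdev-srv02.hvcc.edu:8443",
])

# Six requests land twice on each backend, in rotation.
picks = [next(backends) for _ in range(6)]
print(picks)
```

Because any node can receive any request, the shared MongoDB ticket registry configured earlier is what lets a ticket issued by one CAS node be validated by another.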
Create the dhparam file. Generating 4096-bit parameters can take several minutes.
openssl dhparam -out /etc/nginx/dhparam-4096.pem 4096
Now unlink the default site and link the HVCC site.
rm /etc/nginx/sites-enabled/default
ln -s /etc/nginx/sites-available/HVCC /etc/nginx/sites-enabled/
service nginx restart
If you receive any errors, run nginx -t to validate the configuration, and check the file for misspellings or other problems such as incorrect certificate file names.