Migrating SonarQube from MySQL to PostgreSQL using Docker

My team has been running SonarQube for a long time now, and we’ve been using MySQL the entire time. But new versions of SonarQube no longer support MySQL, so we were forced to make the move. SonarQube provides a tool to help with the migration, but there were a lot of steps, so I decided to document them.

First I’ll describe our current situation. We’re running both SonarQube and MySQL in Docker containers (no K8s). The containers map volumes to the host, where the database data and the SonarQube data and plugins are stored. We were running version 7.7-community of SonarQube and version 5.7 of MySQL on RHEL. The original docker-compose file looks like this (there are variables in the file that get replaced at run time):

version: "2"
services:
  sonarqube:
    image: sonarqube:7.7-community
    command: -Dsonar.web.context=/sonar
    container_name: ds-sonarqube
    ports:
      - "9000:9000"
    links:
      - "db:database"
    restart: always
    environment:
      - sonar.jdbc.username=$DB_USER
      - sonar.jdbc.password=$DB_USER_PASS
      - sonar.jdbc.url=jdbc:mysql://database:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance&useSSL=false
    volumes:
      - /opt/sonar/extensions/plugins:/opt/sonarqube/extensions/plugins:rw
      - /opt/sonar/logs:/opt/sonarqube/logs:rw

  db:
    image: mysql:5.7.23
    command: --max_allowed_packet=256M
    container_name: ds-sonarqube-db
    ports:
      - 3306
    volumes:
    - /opt/sonar/db:/var/lib/mysql:rw
    restart: always
    environment:
      # MYSQL ENV
      #
      - MYSQL_ROOT_PASSWORD=$DB_ROOT_PASS
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=$DB_USER
      - MYSQL_PASSWORD=$DB_USER_PASS

I used a backup of the MySQL database to run through this process locally on my machine before trying it on the server. Here are the steps I followed, in detail:

  1. At a minimum, back up your MySQL database before starting (a sample backup command follows this list). Since my server is on AWS, I also took a snapshot to be safe.
  2. Take down the current SonarQube and MySQL containers and remove them. Since the data and configuration are mapped to volumes, nothing will be lost.
  3. Create a new standalone MySQL container:
    docker run -d \
    -p 3306:3306 \
    -v /opt/sonar/db:/var/lib/mysql:rw \
    -e MYSQL_DATABASE=sonar \
    -e MYSQL_USER=$DB_USER \
    -e MYSQL_ROOT_PASSWORD=$DB_ROOT_PASS \
    -e MYSQL_PASSWORD=$DB_USER_PASS \
    mysql:5.7.23
    

    Note that the volume maps to the same host location the original compose file used for the MySQL data.

  4. Create a new docker-compose file for SonarQube using PostgreSQL. Make sure you use the same version of SonarQube that you were using with MySQL. The upgrade step for SonarQube comes later. Here is what the new file looks like:
    version: "2"
    services:
      sonarqube:
        image: sonarqube:7.7-community
        command: -Dsonar.web.context=/sonar
        container_name: ds-sonarqube
        ports:
          - "9000:9000"
        links:
          - "db:database"
        restart: always
        environment:
          - sonar.jdbc.username=$DB_USER
          - sonar.jdbc.password=$DB_USER_PASS
          - sonar.jdbc.url=jdbc:postgresql://database:5432/sonar
        volumes:
          - /opt/sonar/extensions/plugins:/opt/sonarqube/extensions/plugins:rw
          - /opt/sonar/logs:/opt/sonarqube/logs:rw
          - /opt/sonar/conf:/opt/sonarqube/conf:rw
          - /opt/sonar/extensions:/opt/sonarqube/extensions:rw
          - /opt/sonar/data:/opt/sonarqube/data:rw
    
      db:
        image: postgres:12.1
        container_name: ds-sonarqube-db
        ports:
          - 5432
        volumes:
        - /opt/sonar/postgres:/var/lib/postgresql/data:rw
        restart: always
        environment:
          - POSTGRES_DB=sonar
          - POSTGRES_USER=sonar
          - POSTGRES_PASSWORD=$DB_USER_PASS
    
  5. Start the new containers and let SonarQube initialize the empty PostgreSQL database.
  6. Download the migrator tool and extract it. Make sure you have a Java 1.8+ JDK (not just a JRE) installed; the full JDK is required by the tool.
  7. Stop the SonarQube container, but not the PostgreSQL container.
  8. In the directory where you mapped “/opt/sonarqube/data” (“/opt/sonar/data” in my case), delete the es6 directory.
  9. In the migrator tool directory, create a file called old.properties. In that file add the following:
    sonar.jdbc.url = jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance&useSSL=false
    sonar.jdbc.username = <sonar_user_name>
    sonar.jdbc.password = <sonar_user_password>
  10. In the migrator tool directory, create another file called new.properties. In that file add the following:
    sonar.jdbc.url = jdbc:postgresql://localhost:32776/sonar
    sonar.jdbc.username = <sonar_user_name>
    sonar.jdbc.password = <sonar_user_password>

    I got the port number from Docker (docker ps shows the host port mapped to the container’s 5432). Since we use credstash for secrets, I had to get the usernames/passwords from there to put in these files. Make sure you delete these files when you are done.

  11. Run the tool:
    ./run.sh -source old.properties -target new.properties
  12. Restart the SonarQube container. If it worked, once SonarQube starts up you should be able to log back in as before.
  13. Prepare to upgrade SonarQube. Increase the max_map_count setting on the server to handle the Elasticsearch version that ships with SonarQube 7.9+:
    sudo sysctl -w vm.max_map_count=262144
  14. Take down the running SonarQube and PostgreSQL containers and remove them.
  15. In your docker-compose file, change the version of SonarQube to 7.9-community.
  16. Restart the containers using the edited docker-compose file.
  17. When SonarQube starts, it will be in maintenance mode. You need to upgrade the database to the new version by browsing to https://<YourSonarUrl>/setup. The database will upgrade and SonarQube should start normally.
  18. Log in as an administrator and upgrade any plugins.
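
For step 1, here’s a minimal sketch of the kind of backup I mean, assuming the MySQL container name (ds-sonarqube-db) from the compose file above; the output filename is just an example:

# Dump the sonar database from the running MySQL container to a file on the host
docker exec ds-sonarqube-db sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" sonar' > sonar-backup.sql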

Now your installation should be good to go and ready to upgrade to version 8 of SonarQube when it hits LTS status.

HTTPS Locally in Vue.js

As I have been experimenting with FIDO2 and Yubikey, I discovered that in order to test it running locally, I needed to run the site with SSL/TLS. Vue does support this, but there were a few steps:

  1. Create certificates locally
  2. Trust the new certificates
  3. Add the certificates to the project

There are plenty of articles detailing the first two steps.

After creating the Certificate Authority and certificates, you should end up with three files: the CA certificate, a TLS certificate, and a private key. Besides having these three files, it’s very important that you trust the CA certificate in your operating system. Create a new directory called “certs” in the root of your web project and move all three files into it.
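
If you want a starting point for the first step, here’s a minimal sketch using openssl. The filenames (rootCA.pem, server.key, server.crt) match the config below, the subject values are placeholders, and note that modern browsers may also require a subjectAltName extension, which the dedicated articles cover:

# Create a local Certificate Authority
openssl req -x509 -newkey rsa:2048 -nodes -days 825 \
  -keyout rootCA.key -out rootCA.pem -subj "/CN=Local Dev CA"

# Create a private key and a certificate signing request for localhost
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=localhost"

# Sign the server certificate with the local CA
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -days 825 -out server.crt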

If your project does not have a vue.config.js file, create one in the root of the project. Vue uses this file to hook into webpack when running and building your site. Add the following to that file:


'use strict'

const fs = require('fs')
const appRoot = process.cwd()

module.exports = {
  devServer: {
    host: 'localhost',
    port: 8080,
    https: {
      key: fs.readFileSync(`${appRoot}/certs/server.key`),
      cert: fs.readFileSync(`${appRoot}/certs/server.crt`),
      ca: fs.readFileSync(`${appRoot}/certs/rootCA.pem`)
    },
    hotOnly: true
  }
}

Of course, you’ll need to substitute your filenames for the ones in my code snippet. The snippet uses the Node.js built-in fs module and the current working directory (cwd) to build absolute paths to your certificate files. The webpack devServer then uses those files to serve your site locally over HTTPS.
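
With that in place, a normal dev-server run (this assumes the standard Vue CLI serve script) comes up over HTTPS:

npm run serve
# then browse to https://localhost:8080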

Flutter Initial Impression

Flutter has been getting a lot of positive press in my social media feeds. I listened to an episode of Software Engineering Daily that interviewed Eric Seidel, one of the founders of Flutter.

I was extremely disappointed that they chose not to implement any hardware interfaces, like the accelerometer, GPS, etc. Flutter really just cares about the UI. They are letting “the community” build plugins for all the hardware interactions because those vary by, and sometimes within, a platform. So now Flutter apps will be like PhoneGap and WordPress. Soon (if not already) there will be a dozen GPS plugins, all with slightly different feature sets, wildly different degrees of quality, and patchy support for different devices within a platform (think of the camera across Android devices). Choosing between these plugins will be difficult, and in many cases plugins can’t be used by certain clients, like the federal government, due to security concerns and vetting processes. So in those cases you will just write your own. The cost of a cross-platform app is not going to be lessened by using Flutter. At this point I’d say it’s OK for CRUD or brochureware, but not much else. If your app doesn’t interact with the hardware, why not just build a web site?

I’ll still write some code with it, but I’m disappointed with the reach of the platform into the devices it runs on.

Handling MongoDB Secrets At Build Time with Docker


CI/CD is an important part of our team’s process. We regularly build small back-end web services to support the apps we build. We often use a combination of Node.js and MongoDB as the stack for these services, and deploy them using Docker. As we keep ramping up our security practices, we have been removing secrets embedded in projects, using CredStash to store and serve them instead. Since our servers are hosted at AWS, this is a good solution for us. CredStash is another topic entirely, but my team has it set up on their workstations and the build servers are configured to use it as well. The solution presented here uses CredStash, but it could be any other secret storage system, as long as you can access the system via a shell script. The important part is that the secrets are acquired at build time and not persisted anywhere on the build server or in the resulting containers.

We want to be able to set up and populate the database at creation, and also to reset the database to its initial state during the development process. The best-case scenario is to do this during the build process, so no one has to log into the server to do any setup or cleanup. Just kick off a build, which anyone on the team can do.

We want to run MongoDB in authenticated mode because an unauthenticated database is a security hole waiting to be found. Security is a focus for us. There are four things we need to accomplish with the MongoDB database:

  1. Create an administrative user
  2. Create a service account for the web service app
  3. Populate the collections with initial state
  4. Allow the database to be set back to initial state

Our build system is based on Jenkins, and we are using Pipelines so our build job configuration is persisted in source control. Our build script defines multiple build types; for the purposes of this article we’ll focus on two of them, Development and Reset. Development is the default and happens on every git push; Reset is used to set the database back to its initial state. I’m not going to cover all aspects of the build job, but we are using Docker Compose to create a container for the Node app and a container for MongoDB.

Let’s start with a script to populate the database. We’ll create a JavaScript file that inserts all the base data, something like this (the real file has many more lines and collections):

db.request.remove({});

db.request.insertOne({
    "caseNumber": "1234567890",
    "zipCode": "48439",
    "requestLeniency": false,
    "firstName": "Andy",
    "lastName": "Smith",
    "issueDate": "04/01/2018"
});

db.request.insertOne({
    "caseNumber": "987654320",
    "zipCode": "48185",
    "requestLeniency": true,
    "firstName": "Emily",
    "lastName": "Juarez",
    "issueDate": "04/07/2018"
});

The first line removes any existing documents for the collection so we can use this file to reset the collection later. On first run this line has no effect. This file gets saved as setup.js in the project repository. The Dockerfile for the MongoDB container adds this file into the image so the file is part of the container:

FROM mongo:3.4

ADD setup.js /data/setup.js

In our docker-compose.yml we are mapping the /data/db of the MongoDB container to the file system of the Docker host in order to persist the database data across builds. You could achieve something similar with a data container. In this example the database is called sample and the MongoDB container is called test-db.
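
For context, here’s a minimal sketch of what that docker-compose.yml could look like; the service layout, ports, and host paths are illustrative, not our exact file:

version: "2"
services:
  test-service:
    build: .
    container_name: test-service
    ports:
      - "3000:3000"
    links:
      - "test-db"

  test-db:
    build: ./db          # builds the Dockerfile above, based on mongo:3.4
    container_name: test-db
    command: --auth      # run MongoDB in authenticated mode
    ports:
      - 27017
    volumes:
      - /opt/sample/db:/data/db:rw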

The next step is to persist the secrets in CredStash. I create strong passwords then persist them to CredStash using my terminal:

credstash put db-admin StrongPassword
credstash put db-user AnotherStrongPassword

Now create a shell script that creates the users and populates the database. First, set up the account names and get their passwords from CredStash:

ADMIN_USER="admin"
ADMIN_PASS="$(credstash get db-admin)"
APP_USER="service"
APP_PASS="$(credstash get db-user)"

Next, assuming it’s a first-time build of the container, create an admin user and a service account:

docker exec test-db mongo admin --eval "db.createUser({user: '${ADMIN_USER}', pwd: '${ADMIN_PASS}', roles:[{role:'root',db:'admin'}]});"

docker exec test-db mongo sample -u $ADMIN_USER -p $ADMIN_PASS --authenticationDatabase admin --eval "db.createUser({user: '${APP_USER}', pwd: '${APP_PASS}', roles:[{role:'dbOwner', db:'sample'}]});"

After creating the users, the script creates a marker file we can look for on later builds in order to skip creating the users again (re-running user creation fails once the users exist). The script also restarts the application container, because the app container comes up before the database users are set up and its initial connection attempt fails.

docker exec test-db touch /data/db/.mongodb_password_set

docker restart test-service

The last step is to populate the database with initial state. The JavaScript file is in the container already.

docker exec test-db mongo sample -u $APP_USER -p $APP_PASS --authenticationDatabase sample /data/setup.js

Here is the entire script (data.sh) including all the checks for different states:

#!/usr/bin/env bash

ADMIN_USER="admin"
ADMIN_PASS="$(credstash get db-admin)"
APP_USER="service"
APP_PASS="$(credstash get db-user)"

FOUND=0
docker exec test-db ls /data/db/.mongodb_password_set || FOUND=$?

echo ""
echo "Previous DB setup file found: ${FOUND}"
echo ""

if [ "$FOUND" -ne "0" ]; then

	echo ""
	echo 'DATABASE:  *** Creating Admin User ***'
	echo ""
	sleep 1

	docker exec test-db mongo admin --eval "db.createUser({user: '${ADMIN_USER}', pwd: '${ADMIN_PASS}', roles:[{role:'root',db:'admin'}]});"

	echo ""
	echo 'DATABASE:  *** Creating App User ***'
	echo ""
		
	sleep 2

	docker exec test-db mongo sample -u $ADMIN_USER -p $ADMIN_PASS --authenticationDatabase admin --eval "db.createUser({user: '${APP_USER}', pwd: '${APP_PASS}', roles:[{role:'dbOwner', db:'sample'}]});"

	docker exec test-db touch /data/db/.mongodb_password_set

	sleep 1
	
	docker restart test-service
fi

if [ "$FOUND" -ne "0" -o "$BUILD_TYPE" = "Reset" ]; then
	echo ""
	echo 'DATABASE:  *** Loading or Restoring Initial Data ***'
	echo ""
	
	docker exec test-db mongo sample -u $APP_USER -p $APP_PASS --authenticationDatabase sample /data/setup.js
fi

echo ""
echo "DATABASE:  *** Database Setup Complete ***"
echo ""

In the Jenkinsfile.groovy configuration of the pipeline, one of the steps can simply be to execute the above shell script:

// Lots of stuff omitted for brevity....

static final String BUILD_TYPE_DEV = 'Development'
static final String BUILD_TYPE_RESET = 'Reset'

properties(
    [
        parameters([choice(choices: [BUILD_TYPE_DEV, BUILD_TYPE_RESET], description: 'Reset restores the database to starting state', name: 'BUILD_TYPE')]), 
    ])

    
switch (params.BUILD_TYPE) {
    case BUILD_TYPE_DEV:
        // Do all the dev build stuff (omitted for brevity)
        break
    case BUILD_TYPE_RESET:
        stage('Data Reset') {
            setExecutePermissions()
            sh './data.sh'
        }
        break
    default:
        echo "Unsupported build type!"
        currentBuild.result = 'FAILURE'
        break
}

void setExecutePermissions() {
    echo "Setting Execute Permissons"
    script {
        sh 'chmod +x data.sh'
    }
}

So now the MongoDB database has two users set up, and the passwords are not in the build scripts or in an environment variable; they only existed in memory during the build. And the database is pre-populated for development and can be reset at any time.
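
As a quick sanity check (hypothetical, but using the names from this article), you can confirm the service account works and the data was loaded:

# Count the seeded documents using the service account
docker exec test-db mongo sample -u service -p "$(credstash get db-user)" \
    --authenticationDatabase sample --eval "db.request.count()"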

GitHub repo with sample project

Set Up Android Accessibility Tests Using Espresso

The Espresso documentation has a good, simple example of how to set up accessibility tests in Android. By including AccessibilityChecks, your tests run a number of rules against the activity/fragment under test to verify its accessibility. The tests fail if any rule does not pass. The basic gist is that you add a @BeforeClass annotated method which calls AccessibilityChecks.enable():

@BeforeClass
public static void enableAccessibilityChecks() {
    AccessibilityChecks.enable();
}

You are supposed to enable this in Espresso 3 by adding the following dependencies to your build.gradle file:


androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-accessibility:3.0.1'

Unfortunately, I have not been able to make it work due to an error in the Espresso library.

Espresso 3.0.1 Broken

The setup described in the Android documentation results in a build error if you include the espresso-accessibility library referenced in the documentation:

Error:Error converting bytecode to dex:
Cause: com.android.dex.DexException: Multiple dex files define Landroid/support/test/espresso/accessibility/R$attr;

This issue was reported on Stack Overflow, but the one answer did not work for me. In the Google Issue tracker a response implies the problem is fixed in v3.0.2. I was unable to get my hands on that version to test it out.

In order to solve the problem, I had to roll back the Espresso libraries to version 3.0.0 in build.gradle:

Espresso 3.0.0 Broken

Turns out this version of Espresso is also broken, but in a different way. It’s missing a transitive dependency on Guava. To get Espresso 3.0.0 to work, you need to add the missing dependency on Guava into your build.gradle:


androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.0'
androidTestImplementation 'com.android.support.test:runner:1.0.0'
androidTestImplementation 'com.android.support.test.espresso:espresso-accessibility:3.0.0'
androidTestImplementation 'com.google.guava:guava:20.0'

I published a simple example project on GitHub demonstrating an Espresso UI test that includes accessibility checks. The project’s one UI test intentionally fails, so you can see what the output looks like when an accessibility check fails. There is a comment in activity_main.xml marking where the accessibility problem lies. The “broken” branch has the project set up for Espresso 3.0.1 so you can see that error. Hopefully Google pushes 3.0.2 soon.
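
If you pull down the sample project, the accessibility checks run as part of the normal instrumented tests (standard Gradle invocation, assuming a connected device or emulator):

# Run the instrumented UI tests, including the accessibility checks
./gradlew connectedAndroidTest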

Be Consistent Not Uniform

Android UI is not iPhone UI

A common shortcut is to make one UI work on multiple platforms, and I find myself fighting this misconception often. My thoughts on this were spurred by the article Mobile First, Desktop Worst, which basically takes on mobile-first and responsive design as flawed. I think many of the arguments presented in that article are also relevant to building mobile UIs for an app that exists on both iOS and Android.

UI development is the most time-consuming aspect of developing mobile apps. As the app owner you may want your app to be the same on all platforms to try to minimize the work, but this thinking is wrong. Individual users don’t use your app on multiple platforms and expect a common experience; they use your app on their chosen platform and expect it to act like the other apps on that platform. An application which does not adopt the UI conventions of the target platform will have diminished success. Users now expect applications to match their platform experience, especially millennials and users who have no experience with other platforms.

Each platform has controls, widgets, and interaction paradigms that do not exist on other platforms. One UI that works across all platforms will not take advantage of the unique features of each platform, becoming a compromised design that does not meet users’ expectations. The differences in each platform are typically what make a quality user experience for that platform. The least-common-denominator approach of sameness across platforms reduces the potential for adoption and success of an app on any given platform. Mobile usability and a fabulous UX are now expected, and your app won’t achieve that unless it exploits the platform and device features.

Many of the cross-platform tools make this mistake out of the gate and promise a two-for-one outcome, which is fallacious thinking. Don’t fall into this trap.

Apache HTTP Client in Android API 23+

Google had been telling us for quite some time that the Apache HTTP client was deprecated and going to be removed, and with the release of Marshmallow (API 23) it came true. For the vast majority of developers, this is a non-issue; the HttpURLConnection class is more than adequate. But for some enterprise developers, this presented a problem: HttpURLConnection does not support NTLM (Microsoft Active Directory) authentication, while the Apache client does. If you are using, or want to use, the Apache client, it is still possible. Simply add this to the android section of your build.gradle:

android {
    useLibrary 'org.apache.http.legacy'
}

Now when you build, the Apache client becomes available again. It’s still marked as deprecated and Android Studio will complain, but it definitely works.