Report Aggregation

In the previous milestone, we discussed the details of our endeavor to parallelize builds in Jenkins at the individual test case level.  As discussed, this allowed us to provide test coverage in a quick and scalable way. The only limitation we came across was how many inexpensive Jenkins slaves we wanted to spin up at any given time.  In terms of efficiency and speed, we reached a new landmark at TrueCar for continuous testing. With that accomplishment, we were faced with a new challenge: reporting.

Imagine over 100 tests executing, with a small handful of them failing.  With our matrix job setup, each test gets its own individual report.  This means we'd have to scour a sea of test cases to find the few that went red before investigating what went wrong.  This can be a frustrating and time-consuming task.

The solution we’ve come up with is to build our own top-level HTML report that aggregates all the reports into one location.  The report provides basic pass/fail information along with a link to the actual report per test case.  Though not a perfect solution, it offers a single page to view and access links to broken tests.

Building the Post-Build Script

This script does most of the heavy lifting when it comes to aggregating all the pertinent data that we wanted to add to our new report.  Essentially, it loops through each of the TNAME values, builds a URL that points to each test case’s archived JSON report file in Jenkins, and processes it in a similar way to what we did in the init script mentioned in Milestone 2.  The data is then encoded for easier Bash processing later when we build the actual report.  To be clear, this script does not generate the HTML report, but only prepares the data for passing to a separate triggered job that we created solely for building aggregated reports.

## assistance/jenkins/parallel_init.sh
source ./jenkins/common.sh

mkdir -p ${BUILD_TAG}

# Process any additional axes
process_permutations

# Read JSON reports from each test and get all pertinent info
acquire_test_data


if [ -n "${FAILURE_FOUND}" ]; then
   echo "PASSED=false" > ${BUILD_TAG}/env.props
else
   echo "PASSED=true" > ${BUILD_TAG}/env.props
fi

echo "TNAMES=${tnames}" >> ${BUILD_TAG}/env.props
echo "PERMUTATIONS=${permutations}" >> ${BUILD_TAG}/env.props
echo "TEST_DATA=${test_data[@]}" >> ${BUILD_TAG}/env.props
echo "DEPLOY_BUILD_URL=${BUILD_URL}" >> ${BUILD_TAG}/env.props
echo "DEPLOY_BUILD_ID=${BUILD_ID}" >> ${BUILD_TAG}/env.props
echo "DEPLOY_JOB_URL=${JOB_URL}" >> ${BUILD_TAG}/env.props

## assistance/jenkins/common.sh
function process_permutations() {
  tnames="${TNAMES:-default}"

  if [[ -n ${ANOTHER_AXIS} ]]; then
    for axis in ${ANOTHER_AXIS}; do
      permutations="${permutations} ${tnames// /,ANOTHER_AXIS=${axis} },ANOTHER_AXIS=${axis}"
    done
  else
    permutations="${tnames}"
  fi
}
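
# Illustrative example (values are hypothetical, not from the original post):
# with TNAMES="login search" and ANOTHER_AXIS="chrome firefox", the function
# above builds permutations like
#   login,ANOTHER_AXIS=chrome search,ANOTHER_AXIS=chrome
#   login,ANOTHER_AXIS=firefox search,ANOTHER_AXIS=firefox
# i.e. every test name is paired with every value of the extra axis.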

function acquire_test_data() {
  if [ -n "${permutations}" ]; then
    test_data=()
    for tname in ${permutations}; do
      url="${JOB_URL}TNAME=${tname////%2F}/${BUILD_ID}/artifact/report.json"

      temp=$(ruby -e "
        require 'json'
        require 'open-uri'
        data = JSON.parse(open('${url}') { |f| f.read })
        test_data = []

        # Acquire pass/fail and test name information
        test_data += data.map do |f|
          failed = f['failed'] ? '&#10008;' : ''
          feature = f['class']
          scenario = f['name'].capitalize
          \"#{failed}^#{feature}^#{scenario}^${tname}\"
        end.uniq

        puts test_data.join('-_-')
      ")

      # Encode spaces within test cases and decode spaces
      # that delimit test cases
      temp=${temp// /**}
      temp=${temp//-_-/ }

      # Add the values to the array we defined above
      test_data+=(${temp})

      if [[ $temp =~ .*#10008.* ]]; then FAILURE_FOUND=true; fi
    done
  fi
}
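
For reference, the Ruby snippet above expects each archived report.json to be a JSON array of test entries exposing failed, class, and name keys.  The entry and test name below are made up, but they illustrate the shape of the data and of the encoded TEST_DATA tokens that end up in env.props:

# Assumed report.json entry (keys inferred from the Ruby above; values hypothetical):
#   { "failed": true, "class": "Dashboard", "name": "user sees history" }
#
# Corresponding encoded TEST_DATA token produced by acquire_test_data:
#   &#10008;^Dashboard^User**sees**history^some_test_name
#
# Fields are ^-delimited; spaces inside a field become "**" so that plain
# spaces can delimit whole test cases within env.props.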

Building the HTML Report Generator Script

In the end, we became so comfortable with Bash that we even used it to generate the HTML report.  We built a few functions to process the data passed from the Post-Build script and used it to generate each row of an HTML table. The table is then injected into the short HTML block defined at the bottom of the Bash script.  I’ll be the first to admit that there are likely more elegant solutions, but this approach seemed the easiest and most lightweight option at the time.

## assistance/jenkins/aggregated_report.sh
TEST_DATA_ARR=($TEST_DATA)

build_rows() {
  if [ -n "${TEST_DATA}" ]; then
    for test_data in ${TEST_DATA}; do
      IFS=^ read -ra test_meta <<< "${test_data//\*\*/ }"
      failed="${test_meta[0]}"
      new_feature="${test_meta[1]}"
      scenario="${test_meta[2]}"
      tname="${test_meta[3]////%2F}"
      url="${DEPLOY_JOB_URL}TNAME=${tname}/${DEPLOY_BUILD_ID}/artifact/report.html"

      # If feature information is different from last,
      # add new feature header

      if [ "$new_feature" != "$feature" ]; then
        feature=${new_feature}

        echo "<tr>"
        echo "    <td class='col-xs-3' colspan='4'>"
        echo "        <strong>Feature: ${feature}</strong>"
        echo "    </td>"
        echo "</tr>"
      fi

      # Build out the table row
      echo "<tr>"
      echo "    <td>&bull;</td>"
      echo "    <td class='col-xs-9'>${scenario}</td>"
      echo "    <td class='col-xs-1' style='text-align: center;'>"
      echo "        <font color='red'>"
      echo "            <strong>${failed}</strong>"
      echo "        </font>"
      echo "    </td>"
      echo "    <td class='col-xs-2'>"
      echo "        <a href='${url}'>Report Link</a>"
      echo "    </td>"
      echo "</tr>"
    done
  fi
}

pass_or_fail() {
  if [ "${PASSED}" = "true" ]; then
    echo "<span style='color:green'>Passed</span>"
  else
    echo "<span style='color:red'>Failed</span>"
  fi
}

cat <<- _EOF_
<html>
  <head>
    <title>Aggregated Tests Reporting</title>
    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" rel="stylesheet" />
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
    <script src="https://code.jquery.com/ui/1.10.3/jquery-ui.js"></script>
  </head>
  <body>
    <div class="container">
      <div class="jumbotron">
        <h1>Test Results: $(pass_or_fail)</h1>
        <hr>
        <p style="margin:0">
          <strong>Tests Ran: </strong>${#TEST_DATA_ARR[@]}
        </p>
        <p style="margin:0">
          <strong>Job Link: </strong>
          <a href="${DEPLOY_BUILD_URL}">${DEPLOY_BUILD_URL}</a>
        </p>
      </div>
      <div class="panel panel-default">
        <div class="panel-heading">
          <strong>Tests Ran</strong>
        </div>
        <div class="panel-body">
          <div class="table-responsive">
            <table class="table">
              <thead>
                <tr>
                  <th></th>
                  <th>Scenario</th>
                  <th style='text-align: center;'>Failed?</th>
                  <th>Test Results Link</th>
                </tr>
              </thead>
              <tbody>
               $(build_rows)
              </tbody>
            </table>
          </div>
        </div>
      </div>
    </div>
  </body>
</html>
_EOF_

Building the Notification Script

In this step, we simply built a script that generates and runs a cURL command against a notification service’s API, if one is available.  Ours posts to Slack and looks like this…

## assistance/jenkins/slack_report.sh
if [ "${PASSED}" = "true" ]; then
   COLOR="good";
   STATUS=PASS;
else
   COLOR="danger";
   STATUS=FAIL;
fi

# ${channels} is expected to hold one or more Slack channel names, e.g. "#qateam"
for channel in ${channels}; do
  curl -X POST \
    --data-urlencode 'payload={
      "channel": "'"${channel}"'",
      "username": "jenkins",
      "icon_emoji": ":jenkins:",
      "attachments": [
        {
          "color": "'"${COLOR}"'",
          "fields": [
            {
              "title": "'"${DEPLOY_JOB_URL##*job/}"' '"${STATUS}"'",
              "value": "Build #'"${DEPLOY_BUILD_ID}"' (<'"${BUILD_URL}"'artifact/report.html|Open>)"
            }
          ]
        }
      ]
    }' https://hooks.slack.com/services/xxxxxxxxx/XXXXXXXXX/000000000000000000000000
done

Building and Plugging in the Report Aggregator Jenkins Job

Finally, we needed to add these scripts to Jenkins.  First, we created a job used solely for generating the HTML report and sending a notification that links to it.  The build step simply calls the report generator script and redirects its output into an HTML file.  The Post-Build actions archive the newly created HTML file and call the notification script.

Jenkins Job: aggregated_reporting
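
The build step itself can be as simple as the following shell step (a minimal sketch, assuming the scripts live under ./jenkins/ in the workspace, as the source line in parallel_init.sh implies):

# Build step ("Execute shell") of the aggregated_reporting job
bash ./jenkins/aggregated_report.sh > report.html

# Post-Build actions:
#   - Archive the artifacts: report.html
#   - Run the notification script: bash ./jenkins/slack_report.sh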

After creating the job, we needed to trigger it from our original job.  In the Post-Build section of the original job, we opted to “Execute a set of scripts”.  The first action runs the post-build script that we wrote.  The second step triggers the newly created HTML reporter job while passing in the properties file we’ve been using from the beginning along with any current build parameters.
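
Concretely, those two actions look roughly like this (a sketch; the exact option labels depend on the trigger plugin in use):

# 1. Execute shell: run the post-build aggregation script on the parent matrix job
bash ./jenkins/parallel_init.sh

# 2. Trigger a build of the reporting job:
#      Project to build:                 aggregated_reporting
#      Parameters from properties file:  ${BUILD_TAG}/env.props
#      Current build parameters:         yes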

At the very bottom, we unchecked the “Execute script only if build succeeds” box. Lastly, we made sure to set “Execute script on” to MATRIX only.  With that, we are finished with report aggregation!

With this new setup, our tests run quickly and are easily viewable through a single report screen.  With the basic concept of parallelized test builds complete, what remains are applications, optimizations, and enhancements.

One application of this concept can be seen in a couple of jobs that run as post-deploy triggers of release candidates for various apps as they move from QA to Staging to Production.  If a test fails within one of these triggered jobs, the release candidate is blocked from moving further until the failures are resolved.  These jobs therefore act as gatekeepers to production.


Our next and final post is going to focus on an enhancement to our gatekeeper jobs.  Even though we can run hundreds of tests at once as part of normal test jobs, we do not necessarily want every one of these regression tests to run as part of our gatekeepers.  The final milestone will address our approach in differentiating the run strategy and behavior between critical path and regression tests within our gatekeeper jobs.