Blog

Zephyr, Jira, and Pydantic models

 

In this post, I'll review my recent implementation of a client for the Zephyr Squad API. Zephyr is a plugin for Atlassian's Jira that handles test case management and test automation. I integrated with this API because Zephyr is the main test case management tool used by the OEM client: the OEM tracks test cases for each release cycle in Zephyr, then pulls reports to judge whether a software version is ready for the control module. This work is part of the same project I described in my websockets post. My focus for this phase was to retrieve test case execution data and IDs so I could push execution updates (test result updates) back to Zephyr, based on a CSV of results from the automated tests performed on the hardware side. The plan: receive the CSV file, iterate through the results, match each result to its test case in Zephyr, and make the appropriate updates.

To achieve this, I set the following milestones:

  1. Generate a set of API keys through the Atlassian UI for calling the Zephyr API
  2. Generate the JWT required to authenticate API calls
  3. Use the JWT to pull and push test case and test execution data to and from the server
  4. Use Pydantic to define data models (DTOs) for more efficient processing

I will not cover #1 in detail, as it is a straightforward process: go to the Zephyr Squad settings in the UI and generate a set of keys.

Generating the JWT

For me, generating the correct JWT payload was the most challenging part of the project. Not because generating a JWT is complicated, but because the documentation for building the Zephyr-specific JWT wasn't clear to me, and it took quite a bit of reverse engineering of their GitHub code to understand what the API expects. Broken down, generating the JWT has three major pieces.

Construct the canonical path

The first is constructing the canonical path for the API call. The canonical path is a combination of the HTTP method, the endpoint URL (without the domain), and any query parameters, joined by ampersands. If there are no query parameters, the canonical path simply ends with an &. This is also described in the docstring of my JWT generation method.

"""
        :param canonical_path: Needs to be formatted as <METHOD> + relative_path + query params.
        Ex. Relative path is /public/rest/api/1.0/cycle/{cycle_id} which is then used in the final canonical path as
        "'GET&' + relative_path + f'&cycleId={cycle_id}&projectId={project_id}&versionId={version_id}"
"""

Set the payload claims

This is where the canonical path is used. The SHA-256 hash of the canonical path becomes the qsh (query string hash) claim, and the server validates the request against this value; if there is a mismatch, the server returns an invalid qsh error. The sub claim is the Jira account ID, and the iss claim is the access key generated from the Zephyr portal under API keys.

payload_token = {
    'sub': config["zephyr.api"]["account_id"],
    'qsh': hashlib.sha256(canonical_path.encode('utf-8')).hexdigest(),
    'iss': config["global"]["access_key"],
    'exp': int(time.time()) + int(config["zephyr.api"]["jwt_expire"]),
    'iat': int(time.time())
}

Generate and use the JWT

With the payload configured, generating the JWT itself is a one-line return:

return jwt.encode(payload_token, config["global"]["secret_key"], algorithm='HS256')
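For readers curious what that one-liner actually produces, here is a stdlib-only sketch of HS256 signing. PyJWT does all of this (plus validation) for you; the claim values below are placeholders, not real credentials:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def encode_hs256(payload: dict, secret: str) -> str:
    """Minimal equivalent of jwt.encode(payload, secret, algorithm='HS256')."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)


now = int(time.time())
token = encode_hs256(
    {"sub": "account-id", "qsh": "qsh-hex", "iss": "access-key", "exp": now + 3600, "iat": now},
    "secret-key",
)
```

The resulting token is three base64url segments (header, claims, signature) joined by dots, which is exactly the string Zephyr expects after the `JWT ` prefix in the Authorization header.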

Making API calls

With the JWT, we can now call the various endpoints by passing it in the Authorization header:

headers = {
    'Authorization': 'JWT ' + zephyr_jwt,
    'Content-Type': 'application/json',
    'zapiAccessKey': config["global"]["access_key"]
}

A simple call to the requests library runs the execution search:

json_body = {"zqlQuery": f"project = '{project}' AND cycleName = '{cycle_name}'"}
response = requests.post(
    config["zephyr.api"]["prod_base_url"] + relative_path,
    headers=headers,
    json=json_body,
    params=query_params,
)
response_data = response.json()
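In practice it is worth checking the response before decoding it, since a bad qsh or expired JWT comes back as an auth error rather than usable JSON. A small helper like the following (the function name and error handling are my own sketch, not from the Zephyr API) makes those failures obvious:

```python
def parse_zephyr_response(response):
    """Surface auth problems (bad JWT or qsh mismatch) before decoding JSON.

    `response` is any requests-style object exposing status_code, text,
    json(), and raise_for_status().
    """
    if response.status_code in (400, 401):
        # Invalid qsh / expired JWT errors arrive with an explanatory body
        raise RuntimeError(f"Zephyr auth error {response.status_code}: {response.text}")
    response.raise_for_status()
    return response.json()
```

This keeps the calling code to a single line (`response_data = parse_zephyr_response(response)`) while still failing loudly with the server's own error message.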

Models

Building Data Transfer Object (DTO) classes is good practice. DTOs are a whole topic by themselves, so to learn more, check out this Wikipedia page. I always use DTOs because they organize database table objects cleanly: once a DTO is populated with that data, it is a great tool to retrieve, modify, or aggregate the data without further hits to the database. With Zephyr, I don't have direct access to their database schemas, so I created the DTO classes based on the responses in the Zephyr API documentation. That gives me a lot of freedom to navigate the data and add custom methods to the objects. Here is a short example of an API response being mapped to a DTO, then searched for a specific issue key:

# get_execution_navigation_result returns an instance of the cycle DTO
cycle = get_execution_navigation_result(project='test-sample-name', cycle_name='test-cycle-name')
# iterate through the results and search for a specific issue key
matching_cases = [tc for tc in cycle.searchObjectList or [] if tc.issueKey == "TSN-1"]
test_case = matching_cases[0]  # assumes at least one match; guard this in real code
test_case_execution = test_case.execution

For this project, I used a library called Pydantic to create the object models. It is an excellent library that provides an array of useful classes and methods, such as model_dump(), which I use regularly for serialization, and automatic mapping of input data onto a new model object through the BaseModel constructor.
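As a rough sketch, the DTOs behind the earlier snippet might look like this in Pydantic v2. The field names searchObjectList, issueKey, and execution mirror the attributes used above; the rest of the fields and values are illustrative assumptions, not the actual Zephyr response schema:

```python
from typing import List, Optional

from pydantic import BaseModel


class ExecutionDto(BaseModel):
    id: Optional[int] = None
    status: Optional[str] = None  # e.g. "PASS" / "FAIL"; illustrative values


class TestCaseDto(BaseModel):
    issueKey: str
    execution: Optional[ExecutionDto] = None


class CycleDto(BaseModel):
    searchObjectList: Optional[List[TestCaseDto]] = None


# Map a trimmed, illustrative API response onto the models;
# Pydantic coerces the nested dicts into DTO instances automatically
cycle = CycleDto(
    searchObjectList=[
        {"issueKey": "TSN-1", "execution": {"id": 42, "status": "PASS"}},
    ]
)
serialized = cycle.model_dump()  # plain dict, ready for json.dumps
```

The nested-dict-to-model coercion is the "automatic mapping" mentioned above: you hand the constructor the raw response body and get typed objects back.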

Conclusion

I feel this was a fairly routine implementation of an API client. In the past, I have built larger-scale, distributed API solutions and data retrieval pipelines in several different industries, which let me quickly set up this mini app with industry-standard practices. In the end, the effort was a success: I can now update test cases far more quickly and efficiently than we could manually through the UI, saving us time and dollars.

To see the full code, DTOs, and implementation, head on over to my GitHub.

As always, thank you for reading!

  • Rami