Karate Coding Standards & Best Practices
Coding standards and best practices are very important, especially as applications scale up and become more complex. Adhering to these guidelines enhances readability, maintainability, efficiency, collaboration, and knowledge sharing. By adopting a standardized approach, we can ensure that our test codebase is easily understandable, reusable, and adaptable to changes in requirements.
- 1 Follow Scenario Format & Structure
- 2 Ensure Independent Scenarios
- 3 Use Table-Driven Approach Instead of Scenario Outline
- 4 Avoid parallel=false Indicator In The Features
- 5 Add Commentary On Scenarios
- 6 Add numbered print statements within a Scenario
- 7 Prioritize Simplicity & Clarity, Favor Readability Over Complexity
- 8 Employ Reusable Functions As Much as Possible
- 9 Avoid Very Large Feature Files
- 10 Externalize Huge JSON Objects
- 11 Curb Use of Global Variables
- 12 Do Not Repurpose the Same Identifiers
- 13 Asserting Objects/Collections In a Response
- 14 Naming Conventions
Follow Scenario Format & Structure
Use tags to categorize and group related scenarios
Each feature should be written so that the file receives inputs such as credentials, tokens, and reference data identifiers. Other prerequisites, like tenant creation, can be composed with the feature. The feature file should look like it can be executed in either an existing tenant or a new tenant (ignoring reference data dependencies).
Store variables linking external references, such as JSON files and reusable functions, at the top of the feature file, preferably in the `Background` section.
Print Scenario names during test execution by adding `* print karate.info.scenarioName` at the beginning of the `Background` block.
Use numbered `print` statements for easier debugging and readability.
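For example, a feature might be structured as follows (the endpoint, `base_url` variable, helper feature, and payload file are hypothetical):

```
Feature: order management

  Background:
    * print karate.info.scenarioName
    # externalized references kept at the top of the file
    * def order_payload = read('classpath:payloads/order_payload.json')
    * def utils = call read('classpath:helpers/common_utils.feature')

  @orders @smoke
  Scenario: create an order in the current tenant
    * print '1. creating an order from the externalized payload'
    Given url base_url
    And path 'orders'
    And request order_payload
    When method post
    Then status 201
```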
Ensure Independent Scenarios
Design scenarios to be independent and self-contained, avoiding dependencies on the execution order
Reset or clean up any necessary state or data between scenarios to maintain independence. A useful heuristic for deciding whether cleanup is needed is: "can I run my scenario over and over again?" If the answer is "No", then cleanup should be performed.
Each scenario should create its own data, unless that data is declared in the `Background` section of the feature file.
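As a sketch, a scenario can create the data it depends on and clean it up afterwards (the endpoints and response fields are hypothetical):

```
Scenario: cancel an order created by this scenario
  * print '1. creating the order this scenario needs'
  Given url base_url
  And path 'orders'
  And request { customer: 'autotest customer' }
  When method post
  Then status 201
  * def order_id = response.id

  * print '2. cancelling the order'
  Given path 'orders', order_id, 'cancel'
  When method post
  Then status 200

  * print '3. cleaning up so the scenario can be re-run'
  Given path 'orders', order_id
  When method delete
  Then status 204
```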
Use Table-Driven Approach Instead of Scenario Outline
Use tables instead of Scenario Outline Examples to be able to utilize reusable functions, improve code readability, and allow for parallel execution of independent Scenarios.
Tables offer more flexibility in terms of data preparation, as they can be reused within the Scenario and across reusable functions much more easily.
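A minimal sketch of the table-driven style, assuming a hypothetical reusable feature `create_product.feature` that defines a `created_id` variable:

```
Scenario: create several products from a table
  * table products
    | name           | price | currency |
    | 'basic_plan'   | 10    | 'USD'    |
    | 'premium_plan' | 25    | 'USD'    |
  # the reusable feature is called once per table row
  * def results = call read('classpath:helpers/create_product.feature') products
  * match each results contains { created_id: '#notnull' }
```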
Avoid parallel=false Indicator In The Features
The `parallel=false` indicator at the start of a test feature forces its Scenarios to be executed in a single-threaded fashion. It is recommended to run Scenarios in parallel by removing `parallel=false` and explicitly specifying the number of threads in the caller JUnit method.
To run Scenarios in parallel safely, make sure that the test workflow supports idempotency and uses unique UUIDs, IDs, and codes. In addition, multiple related Scenarios should be combined into one or a few independent Scenarios, with an emphasis on reusable functions.
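For instance, generating a unique code per run keeps repeated and concurrent executions from colliding (the endpoint and payload are hypothetical):

```
Scenario: register a product code safe for parallel execution
  # a unique suffix makes the scenario idempotent across repeated and parallel runs
  * def unique_suffix = '' + java.util.UUID.randomUUID()
  * def product_code = 'AUTOTEST_' + unique_suffix
  Given url base_url
  And path 'products'
  And request { code: '#(product_code)', name: 'parallel-safe product' }
  When method post
  Then status 201
```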
Add Commentary On Scenarios
Use comments to document any assumptions, limitations, or known issues related to the scenario.
Keep comments concise and relevant, avoiding unnecessary or redundant information.
Add numbered print statements within a Scenario
Add numbered `print` statements to document the test workflow for each Given-When-Then block or for a significant stage within a Scenario (e.g. calling a reusable function). Keep them brief, outlining the general purpose of the code being executed.
Numbered `print` statements improve test readability and allow readers to follow the workflow more easily while debugging.
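A sketch of how numbered `print` statements (and a comment documenting an assumption) can outline a workflow; the endpoint and fields are hypothetical:

```
Scenario: update a customer address
  # assumption: the customer endpoint echoes the updated resource in the response
  * def customer_id = 'CUST-1001'

  * print '1. fetching the existing customer'
  Given url base_url
  And path 'customers', customer_id
  When method get
  Then status 200

  * print '2. updating the address'
  Given path 'customers', customer_id, 'address'
  And request { city: 'Berlin' }
  When method put
  Then status 200

  * print '3. verifying the address change'
  * match response.city == 'Berlin'
```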
Prioritize Simplicity & Clarity, Favor Readability Over Complexity
Write scenarios in a clear and concise manner, focusing on the essential steps and assertions.
Avoid complex or nested logic within scenarios; move heavy logic into Java functions instead. Scenarios should read as if a technical product owner wrote them.
Use built-in Karate functions and utilities whenever possible to keep the code clean and maintainable.
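For heavy logic, a Java helper can be pulled in via `Java.type`; the class, method, and variables below are hypothetical:

```
Scenario: submit a signed request
  # the signing algorithm lives in a Java class, keeping the scenario readable
  * def SignatureUtils = Java.type('com.example.testutils.SignatureUtils')
  * def signing_key = 'test-signing-key'
  * def order_payload = read('classpath:payloads/order_payload.json')
  * def signature = SignatureUtils.sign(order_payload, signing_key)
  Given url base_url
  And path 'orders'
  And header X-Signature = signature
  And request order_payload
  When method post
  Then status 201
```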
Employ Reusable Functions As Much as Possible
Define reusable functions in a separate file or directory and import them into the feature files as needed.
Use descriptive names for reusable functions to convey their purpose and functionality.
Document the input parameters and return values of reusable functions using comments.
Add the `@ignore` tag to prevent the feature from being run as a standalone feature.
The use of reusable functions allows for test features with less code duplication and improved readability.
Nearly all function calls should be prefixed with `def v = call ...`, particularly when the output of the function is not required for further assertions or data preparation; this keeps the called feature's variables out of the calling scope. In some cases, such as global variable initialization or user/admin login, the output must flow back into the calling feature, so the `def v = call ...` prefix should not be used.
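A sketch of this pattern with hypothetical helper features: the reusable feature is tagged `@ignore` and documents its inputs and outputs.

```
@ignore
Feature: reusable - create a customer
  # input:  customer_name (string) - name of the customer to create
  # output: customer_id (string)   - id of the created customer

  Scenario:
    Given url base_url
    And path 'customers'
    And request { name: '#(customer_name)' }
    When method post
    Then status 201
    * def customer_id = response.id
```

Callers then choose between an isolated call and a shared-scope call:

```
# isolated call: the helper's variables stay inside 'v'
* def v = call read('classpath:helpers/create_customer.feature') { customer_name: 'acme' }
* def customer_id = v.customer_id

# shared-scope call (e.g. admin login): variables defined in the helper,
# such as an auth token, become available directly in this feature
* call read('classpath:helpers/admin_login.feature')
```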
Avoid Very Large Feature Files
Keep feature files focused and concise, avoiding the creation of very large files that encompass too many scenarios.
Split large feature files into smaller, more manageable files based on logical groupings or related functionality.
Use the `call` keyword to invoke reusable scenarios or functions from other feature files when necessary.
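As a sketch, a large end-to-end flow can be composed from smaller feature files (the sub-features and the `order_id` they define are hypothetical):

```
Feature: order lifecycle - composed from smaller features

  Background:
    * print karate.info.scenarioName

  Scenario: create, ship and close an order
    * print '1. creating the order'
    * def created = call read('classpath:orders/create_order.feature')

    * print '2. shipping the order'
    * def shipped = call read('classpath:orders/ship_order.feature') { order_id: '#(created.order_id)' }

    * print '3. closing the order'
    * call read('classpath:orders/close_order.feature') { order_id: '#(created.order_id)' }
```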
Externalize Huge JSON Objects
If a scenario requires large or complex JSON objects, consider externalizing them into separate files.
Use the `read()` function to load the JSON objects from external files and reference them within the scenario.
Keep the external JSON files organized and properly named to maintain clarity and maintainability.
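A minimal sketch, assuming a hypothetical externalized payload at `classpath:payloads/orders/create_order.json` and a `customer_id` defined earlier in the feature:

```
* def new_order = read('classpath:payloads/orders/create_order.json')
# override only the fields that matter for this scenario
* set new_order.customer_id = customer_id
Given url base_url
And path 'orders'
And request new_order
When method post
Then status 201
```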
Curb Use of Global Variables
Minimize the use of global variables and prefer local variables within scenarios whenever possible.
Document the purpose and usage of global variables using comments.
Do Not Repurpose the Same Identifiers
Each scenario should have its own unique set of data identifiers to ensure independence and avoid potential conflicts or side effects.
If scenarios rely on pre-existing data, consider creating separate test data sets for each scenario or using dynamic data generation techniques to ensure uniqueness.
When creating test data or referring to specific data identifiers (e.g., order IDs, user IDs & other UUIDs) in your scenarios, avoid using the same identifiers across multiple scenarios to maintain test integrity.
Asserting Objects/Collections In a Response
When asserting on a response that may be delayed, add a `retry until` condition rather than "pausing" the test execution.
Utilize the `contains` assertion to check whether a collection contains specific elements or values.
Combine assertions with the `match` keyword to perform deep equality checks on collections while staying flexible when additional properties are added to objects.
Use `contains any`, `contains only`, and similar assertions to check for the presence or absence of specific elements in a collection. Never assume that the order of objects in a collection is guaranteed.
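A sketch combining these assertions, with a hypothetical endpoint and response shape, and assuming `customer_id` and `order_id` were defined earlier:

```
Scenario: order appears in the customer's order list
  Given url base_url
  And path 'customers', customer_id, 'orders'
  # the list is updated asynchronously, so retry instead of sleeping
  And retry until responseStatus == 200 && response.orders.length > 0
  When method get

  # order of elements is not guaranteed, so use contains-style assertions
  * match each response.orders contains { id: '#string', status: '#string' }
  * match response.orders[*].id contains order_id
  * match response.orders[*].status contains any ['CREATED', 'PENDING']
  * match response.supported_currencies contains only ['USD', 'EUR']
```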
Naming Conventions
Use descriptive and meaningful names for feature files, scenarios and variables.
Use snake_case convention throughout the test suite.
Use uppercase with underscores for constant names to differentiate them from variables.
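For example (all names hypothetical), in a feature file named `create_invoice.feature`:

```
Background:
  # constants in uppercase with underscores
  * def MAX_RETRY_COUNT = 3
  * def DEFAULT_CURRENCY = 'USD'

Scenario: create an invoice using snake_case variable names
  * def invoice_payload = read('classpath:payloads/invoice_payload.json')
  * def customer_reference = 'CUST-' + java.util.UUID.randomUUID()
```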