2024-06-18 Meeting notes

Date

Attendees

Goals

Discussion items

Time | Item | Who | Notes



SonarQube is not configured the way we want it to be.

Static Code Analysis is not always possible.

There should be a way to obtain some kind of standardised output. In practice this will generally lead to a YAML document or something similar.
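As a purely illustrative sketch, such a standardised output document might look like the YAML below; every field name here is hypothetical, nothing has been agreed yet:

  # Hypothetical standardised quality report -- all keys are illustrative only
  module: mod-example
  analysis:
    staticAnalysis: true
    tool: SonarQube        # where static analysis is possible at all
  coverage:
    tool: JaCoCo           # JVM projects; other stacks may report differently
    linePercent: 74.2
  tests:
    - kind: unit
      count: 385
    - kind: hybrid         # e.g. exercised via HTTP GET against a running module
      count: 112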

Code coverage does not work for, or does not apply to, Grails projects.

The output of this group must be a list of features.

GoLang, Grails, JavaScript, ...

React tests for Grails projects are not meaningful; they only inflate the code coverage figure.

Jeremy: I recall internal debates we had about why we are doing this (setting code analysis standards).

Moving away from automated testing is not a good idea; a manual analysis of code quality would be more difficult.

"Integration"- and "Unit"-Testing has led to misunderstandings. Instead, we should describe the kind of test that we are thinking of.

But I am in favour of automated testing of the code base. It does not matter whether we call it a "unit test" or an "integration test".

Ethan: How does GitHub run these tests?
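For reference, a minimal GitHub Actions workflow for a Gradle-based (e.g. Grails) module could look like the sketch below; the file name, job layout and Gradle tasks are assumptions, not the actual project setup:

  # .github/workflows/test.yml -- hypothetical sketch
  name: Tests
  on: [push, pull_request]
  jobs:
    test:
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v4
        - uses: actions/setup-java@v4
          with:
            distribution: temurin
            java-version: '17'
        - name: Run tests with coverage
          run: ./gradlew check jacocoTestReport  # jacocoTestReport comes from the Gradle JaCoCo plugin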

Many "Title-instance resolution services" (TIRS) (question)

Hybrid tests: similar to a unit test, but we exercise the code with an HTTP GET call.

In one particular case this breaks the transactions. Client databases that have already been running for 10 years can become quite messy; we test for the exceptions.

A Docker Compose container is running; each test sets up a test client.
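A minimal sketch of such a setup, assuming a PostgreSQL backing service (image, service name and credentials are illustrative only):

  # docker-compose.yml -- illustrative sketch
  services:
    db:
      image: postgres:16
      environment:
        POSTGRES_USER: test
        POSTGRES_PASSWORD: test
        POSTGRES_DB: testdb
      ports:
        - "5432:5432"
  # Each test (file) would then create its own clean client/tenant in this
  # database, so every test starts from a known state.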

Each test file is a clean test client(?). On GitHub Actions, a test run takes 5-6 minutes. Jeremy: 45 minutes.

Ankita: It takes about 20 minutes.

Jeremy: One would need JaCoCo or something like that. What do we need code coverage figures for? We want an approximation of the actual code coverage; what percentage one covers depends on the method one employs.

Refactoring improves the code but can make the coverage figure worse; the metric then works against code quality...

Ethan: The limits are arbitrary. We have just chosen some number: "80% is good".

Jeremy: What about the module descriptor?

Ethan: 90% of the endpoints are CRUD. Any rule that is based on "solid in, solid out"(?) has edge cases.

Ankita: Really testing the functionality must also be part of the analysis. Can one bring these two concepts together, Ethan?

Ethan: One could create a test suite that only does "API call in, API call out".
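Purely as an illustration of that idea, such black-box tests could be written declaratively, e.g. as YAML; the format and the endpoint below are invented for this sketch:

  # Hypothetical declarative API tests: request in, expected response out
  - name: resolve a known title
    request:
      method: GET
      path: /titles/12345
    expect:
      status: 200
  - name: unknown title returns 404
    request:
      method: GET
      path: /titles/does-not-exist
    expect:
      status: 404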

Ankita: Can we categorise the tests which are related to our project?

Let's bundle all kinds of tests to get a single number for the code coverage.

What do we need to test, for the frontend and for the backend?

Jeremy: It sounds bureaucratic, but everyone will like it.

The (development) teams will need to determine the kind of testing method.

Ethan: If we ever write a Python module, we would have no testing method for it. We need more confidence in the underlying process.

Jeremy: There is always a balance between economy and reliability, and one can pull it in different directions. With reliability, accountability is important. It also needs to be reproducible for the Community.

Ethan: It is more difficult to explain something to someone else than to do it oneself.

Jeremy: You (K-Int) are not bad actors. But we need to protect the Community from bad actors.

Next time: Let's create some sample methodology documents.





Action items
