Enable Oolite development
Posted: Sun Apr 23, 2023 8:46 pm
It seems to me that Oolite development has stalled. Here is a code change that took 474 days from commit to a pre-release.
It is absolutely understandable that development stalls when there are almost no developers, no automatic build environment, and more supported platforms than a single person can realistically test their changes on. As stated here, some changes were just lucky guesses. On top of that, libraries have to be replaced over time, separately for every platform and thoroughly tested. How should an individual master all this? But without ongoing development, without a minimum of maintenance, the code will at some point no longer compile cleanly (this point has already come for the macOS platform).
Countermeasures need to lower the hurdle of changing code whose effects one cannot fully foresee. There are several steps involved. One of them was to introduce automatic builds: today, whenever we merge or push something into the master branch, Oolite is automatically built for both Linux and Windows. But there is more...
a) Why do we wait until a merge into master before running an automatic build? Wouldn't it be worth seeing a build of a feature branch before deciding to merge?
b) Doing more builds also requires good names for them. Tagging the builds may become essential, and we need to agree on a method for that (for example, deriving a name from the latest release tag plus the commit hash, as git describe does).
c) More builds will create more artefacts. Can GitHub host them all, or will we run into a quota? How can we keep the number of builds at a reasonable level?
d) Automatic builds just provide binaries; it would still be up to users to download them, test them and report back whether everything is good. Everything? Keep in mind the size of the code base, the number of features and thus the complexity of testing. So we need to think about automated testing. Unit tests would be a suitable approach: with them, every developer would see the impact of a change at the earliest possible moment (a minimal sketch follows this list).
e) How would we check whether unit tests have been implemented, or whether they need to be updated to cover features added in the future? For this there are tools that measure test coverage or flag other issues with the code.
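To make d) concrete, here is a minimal sketch of what a unit test could look like, in plain C since large parts of Oolite's core are C. Everything in it is hypothetical: clamp_fuel stands in for any small, pure helper in the code base, and the CHECK macro for whatever test framework we might agree on. The point is only that the test is a tiny program whose exit code CI can evaluate.

```c
/* Minimal unit-test sketch in plain C. The function under test
 * (clamp_fuel) is hypothetical and stands in for any small, pure
 * helper in the code base. Exit code 0 = pass, 1 = fail, so an
 * automatic build can pick the result up directly. */
#include <stdio.h>

/* Hypothetical helper: keep a fuel value within [0, max]. */
static int clamp_fuel(int value, int max)
{
    if (value < 0)   return 0;
    if (value > max) return max;
    return value;
}

static int failures = 0;

/* Report the file, line and expression of any failing check. */
#define CHECK(expr) \
    do { \
        if (!(expr)) { \
            fprintf(stderr, "FAIL %s:%d: %s\n", __FILE__, __LINE__, #expr); \
            failures++; \
        } \
    } while (0)

int main(void)
{
    CHECK(clamp_fuel(-5, 70) == 0);   /* below range clamps to 0 */
    CHECK(clamp_fuel(80, 70) == 70);  /* above range clamps to max */
    CHECK(clamp_fuel(35, 70) == 35);  /* in-range value is unchanged */

    printf("%s\n", failures == 0 ? "all tests passed" : "tests failed");
    return failures == 0 ? 0 : 1;
}
```

This also connects to e): when such tests are compiled with gcc's --coverage option and run, gcov can report which lines of the tested code were actually executed, which gives us a first measure of how complete the tests are.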
It seems to me that before we go for a) we need a solution for b) and c).
Similarly, it may be easier to first resolve e) before we attack d).
What are your opinions on this?