DEPENDENCY RESOLUTION / DEPENDENCIES: - You don't know what the backend dependencies are going to be until you ask Okapi. Since there is no integration with Kubernetes, you can't ask Okapi to deploy them, so there is a lot of back and forth. Okapi knows everything it needs, but you need to know what Okapi version you need. The assumption is: I need to run the latest copy of Okapi that I can so I can build the backend.
- How would we like it to work? There are Jiras revolving around dependencies and Kubernetes.
- One issue about wildcard DNS: when you ask Okapi to resolve dependencies, Okapi has to pull in all the module descriptors during that process. It would be nice to only pull in the released versions, but that is fixed now. Jason would like it to work like this: a) get the filter for module discovery in place, and b) have Okapi spit out what you need as a manifest. Version it and it opens up a world of ease of deployment. Okapi will spit out a JSON file; it would have to be YAML and Kubernetes-speak, which could be posted to the Kubernetes API. Add not only the JSON but the YAML and define that service as a static file (a sketch of such a manifest follows below).
- If you didn't want to register every version of every module with your version of Okapi, the YAML would say so. Jakub will comment on FOLIO-1931.
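As a rough illustration of what such a generated, versionable manifest could look like, here is a minimal Kubernetes Deployment and Service for a single backend module. This is a sketch only: the module name, image tag, and port are assumptions, not anything Okapi produces today.

```bash
# Hypothetical example: deploy one backend module (mod-users) to Kubernetes
# and expose it as a cluster-internal Service that Okapi's discovery could
# point at. Module name, image tag, and port are assumptions for illustration.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mod-users
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mod-users
  template:
    metadata:
      labels:
        app: mod-users
    spec:
      containers:
        - name: mod-users
          image: folioorg/mod-users:17.1.0   # assumed released version
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: mod-users
spec:
  selector:
    app: mod-users
  ports:
    - port: 8081
      targetPort: 8081
EOF
```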
- It's going to become a requirement to have the info needed to deploy a specific feature, so you will be able to do it dynamically. You could just wait for the next quarterly release, but some people will want to do continuous deployment. What backend dependencies will that require?
- Hosting partners should be able to release new features without waiting for a giant release.
- Leipzig: they have a Kubernetes cluster; a colleague is leaving, so they are getting into it. They are managing dependency resolution by hand.
- Module registration should be possible by pulling module descriptors from a remote registry. You could configure Okapi to resolve the ID of the module to a workload via the URL. There is a ticket for that (a hedged example of registering a URL with Okapi's discovery follows below).
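As an illustration of resolving a module ID to a running workload by URL, a deployment descriptor can be posted to Okapi's discovery API. The module version, instance ID, and Kubernetes service URL below are assumptions for the sketch; a token header would be needed against a secured Okapi.

```bash
# Sketch: tell Okapi where an already-running module instance lives, so the
# module ID resolves to the Kubernetes Service shown earlier. Version,
# instance ID, and URL are assumptions; add an X-Okapi-Token if Okapi is secured.
curl -s -X POST http://okapi:9130/_/discovery/modules \
  -H 'Content-Type: application/json' \
  -d '{
        "srvcId": "mod-users-17.1.0",
        "instId": "mod-users-k8s-1",
        "url": "http://mod-users:8081"
      }'
```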
- Right now you have to go through a rigamarole to get to Okapi: you have to have a VM with Okapi or have it in a Docker container. Could there be some sort of interface for that other than the command line, an Okapi CLI? You need to rely on a lot of in-house knowledge. An app store? Something in between a package manager and an app store.
- What if I want to stand up Okapi just to resolve dependencies? What do I need to run to set up Okapi, and how do I secure it, before I can ask it?
- Is it possible to use several registries? It is, but it has to be a released module. Module descriptor registry: Jason uses Index Data's as well as his own, which runs a cron job to pull in the Index Data one. They are synced, so he still has access even during downtime (a sketch of pulling descriptors from a remote registry follows below).
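For the registry-sync scenario, this is roughly the call such a cron job could make against a local Okapi to pull released module descriptors from a remote registry. The registry URL is an assumption; substitute whichever registry you actually mirror.

```bash
# Sketch: pull released module descriptors from a remote registry into a
# local Okapi, the kind of sync a nightly cron job could run.
curl -s -X POST http://okapi:9130/_/proxy/pull/modules \
  -H 'Content-Type: application/json' \
  -d '{ "urls": [ "https://folio-registry.dev.folio.org" ] }'
```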
- Reference data is the data you need in order for the module to run and make sense; you need format types to run Inventory, e.g. Sample data is just exemplar data. There aren't cross-module dependencies for reference data. You can stand a module up without reference data and then load it after.
- Bootstrapping the superuser is solved: you can bootstrap a superuser using the APIs, but it's not automated. Create a user and disable it. Jason ran into an instance with a secured Okapi where he made a new tenant and needed a new superuser for the new tenant even though Okapi was already secured. Wayne updated the Perl script, so it's no big deal for a secured tenant. We have had an issue creating a module that uses module permissions (extended permissions). It would be good to have some more or less sanctioned way; there are a few scripts for that extant right now, and there is a ticket to move it into the Stripes CLI. A rough sketch of the manual API steps follows below.
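For orientation only, here is a very rough sketch of the manual superuser bootstrap against an unsecured Okapi and tenant; the secured case (module permissions, extended permissions) is exactly what the existing Perl and CLI scripts deal with. The tenant name, user ID, username, password, and permission set are all assumptions.

```bash
# Very rough sketch of bootstrapping a superuser by hand against an
# UNSECURED Okapi/tenant. All IDs and values below are placeholders.
OKAPI=http://okapi:9130
TENANT=diku
UID=99999999-9999-9999-9999-999999999999

# 1. Create the user record.
curl -s -X POST "$OKAPI/users" \
  -H "X-Okapi-Tenant: $TENANT" -H 'Content-Type: application/json' \
  -d "{ \"id\": \"$UID\", \"username\": \"superuser\", \"active\": true }"

# 2. Attach login credentials.
curl -s -X POST "$OKAPI/authn/credentials" \
  -H "X-Okapi-Tenant: $TENANT" -H 'Content-Type: application/json' \
  -d "{ \"userId\": \"$UID\", \"password\": \"change-me\" }"

# 3. Grant top-level permissions.
curl -s -X POST "$OKAPI/perms/users" \
  -H "X-Okapi-Tenant: $TENANT" -H 'Content-Type: application/json' \
  -d "{ \"userId\": \"$UID\", \"permissions\": [ \"perms.all\", \"okapi.all\" ] }"
```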
UPGRADES AND UPDATES: - People want to do upgrades in real time: will it be blue-green? E.g. mod-users-bl, which has no storage. Jason sees two routes: if you are running more than one tenant, you would have to set up a whole new tenant if you only wanted to update one. Rolling upgrades are set up in Kubernetes: it keeps the old version, spins up the new version, then takes the old version down. Then he has to curl to Okapi to enable the new modules. There is a ticket for the service discovery filter, but it's module-version specific.
- In the use case for CI/CD, we will do a specific-module use case. The project will need all the tenants to keep running while they are supporting the version. This is blue-green deployment rather than rolling. In Kubernetes there is a way to roll back and keep as many states as you want (a rolling-update and rollback sketch follows below). Action item: capture the process.
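For the rolling route, here is a hedged sketch of how the Kubernetes side could look; it does not cover the Okapi calls needed to switch tenants over to the new module version (see the next item). The Deployment name, image tag, and revision-history setting are assumptions.

```bash
# Sketch of a Kubernetes rolling upgrade and rollback for a module Deployment.
# Keep old ReplicaSets around so you can roll back to earlier states:
kubectl patch deployment mod-users \
  -p '{"spec": {"revisionHistoryLimit": 10}}'

# Roll out a new module version; Kubernetes spins up the new pods before
# taking the old ones down (rolling, not blue-green).
kubectl set image deployment/mod-users mod-users=folioorg/mod-users:17.2.0
kubectl rollout status deployment/mod-users

# If the new version misbehaves, roll back to the previous revision.
kubectl rollout undo deployment/mod-users
```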
- Upgrade vs. update strategy. A simple case is an upgrade without a data update: if there are no schema or storage-level changes, you can do it today and roll it out in congruence with your existing data. It is still a manual process; we need at least a set of guides with a recommended process (a sketch of the Okapi side follows below).
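As a sketch of the manual Okapi step for this simple no-data-change case, once the new version's descriptor is known to Okapi (e.g. via the registry pull shown earlier) and the new workload is running, the tenant can be moved to it with the install endpoint. The module version and tenant name are assumptions.

```bash
# Sketch: enable a newer module version for one tenant; Okapi upgrades from
# the previously enabled version. Tenant and version are placeholders.
OKAPI=http://okapi:9130
TENANT=diku

curl -s -X POST "$OKAPI/_/proxy/tenants/$TENANT/install" \
  -H 'Content-Type: application/json' \
  -d '[ { "id": "mod-users-17.2.0", "action": "enable" } ]'
```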
- Update: where you have to change the data. The ERM module does some of this on its own; they have the Kubernetes API and a migration toolkit. RMB's storage mechanism is schemaless: if you use a database with JSON you don't have a schema, though you do have tables, views, and indexes, and the JSON schema is in the API. One thing we could do is ask dev teams to implement table-level migrations, but that won't help with changes to the schema, because existing data won't validate. For a new required field, though, you could just supply some default data and then go back and update it later (see the sketch below).
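To make the "new required field" idea concrete, a table-level backfill on an RMB-style JSONB table might look like the following. The schema, table, field name, and default value are all assumptions for illustration; RMB conventionally stores each record in a `jsonb` column under a `{tenant}_{module}` schema.

```bash
# Sketch of a table-level backfill for a new required JSON field in an
# RMB-style JSONB table, so existing records validate against the new schema.
psql "$FOLIO_DB_URL" <<'SQL'
UPDATE diku_mod_users.users
SET    jsonb = jsonb || '{"patronGroupType": "unknown"}'::jsonb
WHERE  NOT (jsonb ? 'patronGroupType');
SQL
```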
- Still a big challenge: how tolerant of downtime can we be? For the next year there will still be breaking API changes. Two sides: if I have the tooling, downtime is not so bad; but if there isn't a tool, there is a gap.
- Modules need to do data migration on their own, and tooling needs to be developed.