[FOLIO-2425] Okapi fails to start in Jenkins folio-perf-platform job Created: 18/Jan/20  Updated: 03/Jun/20  Resolved: 19/Jan/20

Status: Closed
Project: FOLIO
Components: None
Affects versions: None
Fix versions: None

Type: Task Priority: P2
Reporter: John Malconian Assignee: John Malconian
Resolution: Done Votes: 0
Labels: platform-backlog
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original estimate: Not Specified

Issue links:
Relates
relates to OKAPI-798 postgres_user setting no longer recog... Closed
Sprint: CP: sprint 80/81
Development Team: Core: Platform

 Description   

The performance testing job fails when a fixed Okapi version of 2.36.x or higher is specified. The failure appears to be a PostgreSQL authentication error; the full error message is in the comments below.



 Comments   
Comment by John Malconian [ 18/Jan/20 ]

In the perf-test environment, Okapi is deployed as a Docker container. The exact invocation is:

docker run --name okapi -e 'JAVA_OPTIONS=-Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.10.225 -Dpostgres_port=5432 -Dpostgres_user=folio -Dpostgres_password=folioadmin -Dpostgres_database=folio' -p9130:9130 folioorg/okapi:2.36.1 cluster

The error generated in the log is:

exec java -Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.10.225 -Dpostgres_port=5432 -Dpostgres_user=folio -Dpostgres_password=folioadmin -Dpostgres_database=folio -javaagent:/opt/agent-bond/agent-bond.jar=jolokia{{host=0.0.0.0}},jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -cp . -jar /usr/verticles/okapi-core-fat.jar cluster
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://172.17.0.2:8778/jolokia/
2020-01-18 14:18:55.769:INFO:ifasjipjsoejs.Server:jetty-8.y.z-SNAPSHOT
2020-01-18 14:18:55.801:INFO:ifasjipjsoejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:9779
14:18:56 INFO  Messages             Loading messages from /infra-messages/Messages_en.properties
14:18:56 WARN  MainDeploy           clusterHost not set
14:18:56 WARN  MainDeploy           clusterPort not set
14:18:57 INFO  jFactory$Log4jLogger [LOCAL] [dev] [3.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
14:18:57 INFO  jFactory$Log4jLogger [LOCAL] [dev] [3.12] Picked [172.17.0.2]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
14:18:57 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Hazelcast 3.12 (20190409 - 915d83a) starting at [172.17.0.2]:5701
14:18:57 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
14:18:57 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Backpressure is disabled
14:18:57 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Creating MulticastJoiner
14:18:58 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
14:18:58 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
14:18:58 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] [172.17.0.2]:5701 is STARTING
14:19:00 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] 

Members {size:1, ver:1} [
	Member [172.17.0.2]:5701 - 04f5163f-05d3-493a-a7ff-62dab25a07c0 this
]

14:19:00 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] [172.17.0.2]:5701 is STARTED
14:19:00 INFO  jFactory$Log4jLogger [172.17.0.2]:5701 [dev] [3.12] Initializing cluster partition table arrangement...
14:19:00 INFO  oduleVersionReporter Module okapi-core 2.36.1 started
14:19:00 INFO  oduleVersionReporter git: https://github.com/folio-org/okapi.git 352d031f880b9bdf56f91c1cfd502db5fd344e57
14:19:00 INFO  MainVerticle         cluster NodeId 04f5163f-05d3-493a-a7ff-62dab25a07c0
14:19:00 INFO  BaseSQLClient        Creating configuration for 10.36.10.225:5432
14:19:01 INFO  MainVerticle         Proxy using postgres storage
14:19:01 INFO  InternalModule       InternalModule starting okapiversion=2.36.1
14:19:01 INFO  MainVerticle         Checking for working distributed lock. Cluster=true
14:19:01 INFO  MainVerticle         Distributed lock ok
14:19:01 INFO  Storage              prepareDatabases: NORMAL
14:19:01 INFO  NettyUtils           jasync available transport - native (epoll)
14:19:01 ERROR PostgreSQLConnection Error , message -> ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])
14:19:01 ERROR PostgreSQLConnection Error on connection
com.github.jasync.sql.db.postgresql.exceptions.GenericDatabaseException: ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])
	at com.github.jasync.sql.db.postgresql.PostgreSQLConnection.onError(PostgreSQLConnection.kt:229) [okapi-core-fat.jar:?]
	at com.github.jasync.sql.db.postgresql.codec.PostgreSQLConnectionHandler.channelRead0(PostgreSQLConnectionHandler.kt:199) [okapi-core-fat.jar:?]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [okapi-core-fat.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) [okapi-core-fat.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [okapi-core-fat.jar:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [okapi-core-fat.jar:?]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [okapi-core-fat.jar:?]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:797) [okapi-core-fat.jar:?]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404) [okapi-core-fat.jar:?]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) [okapi-core-fat.jar:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [okapi-core-fat.jar:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [okapi-core-fat.jar:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [okapi-core-fat.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
14:19:01 INFO  AsyncConnectionPool  failed to create connection
com.github.jasync.sql.db.postgresql.exceptions.GenericDatabaseException: ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])
	at com.github.jasync.sql.db.postgresql.PostgreSQLConnection.onError(PostgreSQLConnection.kt:229) ~[okapi-core-fat.jar:?]
	at com.github.jasync.sql.db.postgresql.codec.PostgreSQLConnectionHandler.channelRead0(PostgreSQLConnectionHandler.kt:199) ~[okapi-core-fat.jar:?]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[okapi-core-fat.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) ~[okapi-core-fat.jar:?]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[okapi-core-fat.jar:?]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[okapi-core-fat.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[okapi-core-fat.jar:?]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) ~[okapi-core-fat.jar:?]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:797) ~[okapi-core-fat.jar:?]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404) ~[okapi-core-fat.jar:?]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) [okapi-core-fat.jar:?]
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [okapi-core-fat.jar:?]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [okapi-core-fat.jar:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [okapi-core-fat.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
14:19:01 FATAL PostgresQuery        getCon failed ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])
14:19:01 ERROR MainVerticle         ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])
14:19:01 ERROR MainCluster          ErrorMessage(fields=[(Severity, FATAL), (V, FATAL), (SQLSTATE, 28P01), (Message, password authentication failed for user "okapi"), (File, auth.c), (Line, 329), (Routine, auth_failed)])

I don't quite understand why Okapi is authenticating to the database as user 'okapi' when the invocation passes '-Dpostgres_user=folio'. Were there DB initialization changes in 2.36.x?
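The symptom is consistent with the property name having been renamed: if 2.36.x reads only a new property and falls back to a hardcoded default of 'okapi', then '-Dpostgres_user=folio' is silently ignored. A minimal sketch of that failure mode (class, method, and default are hypothetical illustrations, not Okapi's actual source):

```java
// Hypothetical illustration of how a renamed system property plus a
// hardcoded default would produce the observed behavior.
public class PgUserLookup {
    static final String DEFAULT_USER = "okapi"; // assumed built-in default

    static String resolveUser() {
        // If the code consults only the new property name...
        String user = System.getProperty("postgres_username");
        // ...then a value passed via the old -Dpostgres_user is never
        // read, and the default "okapi" wins.
        return user != null ? user : DEFAULT_USER;
    }

    public static void main(String[] args) {
        // Old property name, as used by the perf-test job: ignored here.
        System.setProperty("postgres_user", "folio");
        System.out.println(resolveUser()); // prints "okapi"
    }
}
```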

Comment by John Malconian [ 18/Jan/20 ]

Setting '-Dpostgres_user=folio' does not set Okapi's PostgreSQL user to 'folio'. Starting with Okapi 2.36.0, the DB user appears to default to 'okapi' regardless of that setting. If I create the database user as 'okapi' instead, everything works:

docker run -d --rm --name foliodb -e POSTGRES_USER=okapi -e POSTGRES_PASSWORD=folioadmin -p5432:5432 postgres:10

docker run -it --rm --name okapi -e 'JAVA_OPTIONS=-Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.20.124 -Dpostgres_port=5432 -Dpostgres_user=okapi -Dpostgres_password=folioadmin -Dpostgres_database=okapi' -p9130:9130 folioorg/okapi:2.36.1 cluster

exec java -Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.20.124 -Dpostgres_port=5432 -Dpostgres_user=okapi -Dpostgres_password=folioadmin -Dpostgres_database=okapi -javaagent:/opt/agent-bond/agent-bond.jar=jolokia{{host=0.0.0.0}},jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -cp . -jar /usr/verticles/okapi-core-fat.jar cluster
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://172.17.0.3:8778/jolokia/
2020-01-18 15:29:04.516:INFO:ifasjipjsoejs.Server:jetty-8.y.z-SNAPSHOT
2020-01-18 15:29:04.558:INFO:ifasjipjsoejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:9779
15:29:05 INFO  Messages             Loading messages from /infra-messages/Messages_en.properties
15:29:05 WARN  MainDeploy           clusterHost not set
15:29:05 WARN  MainDeploy           clusterPort not set
15:29:05 INFO  jFactory$Log4jLogger [LOCAL] [dev] [3.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
15:29:05 INFO  jFactory$Log4jLogger [LOCAL] [dev] [3.12] Picked [172.17.0.3]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
15:29:05 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Hazelcast 3.12 (20190409 - 915d83a) starting at [172.17.0.3]:5701
15:29:05 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
15:29:06 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Backpressure is disabled
15:29:06 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Creating MulticastJoiner
15:29:07 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
15:29:07 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
15:29:07 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] [172.17.0.3]:5701 is STARTING
15:29:09 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] 

Members {size:1, ver:1} [
	Member [172.17.0.3]:5701 - dc5f34b9-a413-4e60-b6e8-a0d83759d35e this
]

15:29:09 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] [172.17.0.3]:5701 is STARTED
15:29:10 INFO  jFactory$Log4jLogger [172.17.0.3]:5701 [dev] [3.12] Initializing cluster partition table arrangement...
15:29:10 INFO  oduleVersionReporter Module okapi-core 2.36.1 started
15:29:10 INFO  oduleVersionReporter git: https://github.com/folio-org/okapi.git 352d031f880b9bdf56f91c1cfd502db5fd344e57
15:29:10 INFO  MainVerticle         cluster NodeId dc5f34b9-a413-4e60-b6e8-a0d83759d35e
15:29:10 INFO  BaseSQLClient        Creating configuration for 10.36.20.124:5432
15:29:10 INFO  MainVerticle         Proxy using postgres storage
15:29:10 INFO  InternalModule       InternalModule starting okapiversion=2.36.1
15:29:10 INFO  MainVerticle         Checking for working distributed lock. Cluster=true
15:29:10 INFO  MainVerticle         Distributed lock ok
15:29:10 INFO  Storage              prepareDatabases: NORMAL
15:29:10 INFO  NettyUtils           jasync available transport - native (epoll)
15:29:10 INFO  MainVerticle         startTenants
15:29:10 INFO  MainVerticle         checkInternalModules
15:29:10 INFO  MainVerticle         Creating the superTenant supertenant
15:29:10 INFO  MainVerticle         starting env
15:29:10 INFO  MainVerticle         Starting discovery
15:29:10 INFO  MainVerticle         Starting deployment
15:29:10 INFO  MainVerticle         API Gateway started PID 1@a7225b562e5b. Listening on port 9130
15:29:10 INFO  MainVerticle         Deploy completed succesfully
15:29:10 INFO  TenantManager        starting supertenant
15:29:10 INFO  TenantManager        handleTimer tenant supertenant module null seq1 0
15:29:10 INFO  TenantManager        handleTimer done no 0
Comment by John Malconian [ 18/Jan/20 ]

If I run the same commands but with Okapi version 2.35.2, I cannot reproduce the issue:

[malc@ip-10-36-20-124 ~]$ docker run -d --rm --name foliodb -e POSTGRES_USER=folio -e POSTGRES_PASSWORD=folioadmin -p5432:5432 postgres:10
57323d2a5d9d3cc00a627cb030ff6f1774c907227453a465feb96b42f12f0459
[malc@ip-10-36-20-124 ~]$ docker run -it --rm --name okapi -e 'JAVA_OPTIONS=-Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.20.124 -Dpostgres_port=5432 -Dpostgres_user=folio -Dpostgres_password=folioadmin -Dpostgres_database=folio' -p9130:9130 folioorg/okapi:2.35.2 cluster
Unable to find image 'folioorg/okapi:2.35.2' locally
2.35.2: Pulling from folioorg/okapi
a44d943737e8: Already exists 
5ac1bdb189a1: Already exists 
db60dd36ed4d: Already exists 
387b4bfc9a39: Already exists 
8873b518a9f5: Already exists 
b4e2fb000119: Already exists 
f5f0bd8c79a6: Already exists 
68f8f1a09040: Already exists 
2b40c496c7b1: Already exists 
ae7dd20d786d: Pull complete 
Digest: sha256:250a51504df159a4962b76df91fc4f356cd4ad8f6f4c6454cb4fc0af8257cebb
Status: Downloaded newer image for folioorg/okapi:2.35.2
exec java -Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.20.124 -Dpostgres_port=5432 -Dpostgres_user=folio -Dpostgres_password=folioadmin -Dpostgres_database=folio -javaagent:/opt/agent-bond/agent-bond.jar=jolokia{{host=0.0.0.0}},jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -cp . -jar /usr/verticles/okapi-core-fat.jar cluster
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://172.17.0.3:8778/jolokia/
2020-01-18 16:12:42.398:INFO:ifasjipjsoejs.Server:jetty-8.y.z-SNAPSHOT
2020-01-18 16:12:42.440:INFO:ifasjipjsoejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:9779
16:12:42 INFO  Messages             Loading messages from /infra-messages/Messages_en.properties ................................
16:12:43 WARN  MainDeploy           clusterHost not set
16:12:43 WARN  MainDeploy           clusterPort not set
16:12:43 INFO  jFactory$Slf4jLogger [LOCAL] [dev] [3.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
16:12:43 INFO  jFactory$Slf4jLogger [LOCAL] [dev] [3.12] Picked [172.17.0.3]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
16:12:43 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Hazelcast 3.12 (20190409 - 915d83a) starting at [172.17.0.3]:5701
16:12:43 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
16:12:43 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Backpressure is disabled
16:12:44 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Creating MulticastJoiner
16:12:44 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Starting 4 partition threads and 3 generic threads (1 dedicated for priority tasks)
16:12:44 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
16:12:44 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] [172.17.0.3]:5701 is STARTING
16:12:47 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] 

Members {size:1, ver:1} [
	Member [172.17.0.3]:5701 - 5f8a3e08-a267-450a-9932-639ffc290189 this
]

16:12:47 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] [172.17.0.3]:5701 is STARTED
16:12:47 INFO  jFactory$Slf4jLogger [172.17.0.3]:5701 [dev] [3.12] Initializing cluster partition table arrangement...
16:12:47 INFO  oduleVersionReporter Module okapi-core 2.35.2 started
16:12:47 INFO  oduleVersionReporter git: https://github.com/folio-org/okapi.git e7398ff5a0919a21e96cc928262f46b44f7308c1
16:12:47 INFO  MainVerticle         cluster NodeId 5f8a3e08-a267-450a-9932-639ffc290189
16:12:47 INFO  BaseSQLClient        Creating configuration for 10.36.20.124:5432
16:12:48 INFO  MainVerticle         Proxy using postgres storage
16:12:48 WARN  InternalModule       InternalModule starting okapiversion=2.35.2
16:12:48 INFO  MainVerticle         Checking for working distributed lock. Cluster=true
16:12:48 INFO  MainVerticle         Distributed lock ok
16:12:48 INFO  Storage              prepareDatabases: NORMAL
16:12:48 INFO  MainVerticle         startTenants
16:12:48 INFO  MainVerticle         checkInternalModules
16:12:48 INFO  MainVerticle         Creating the superTenant supertenant
16:12:48 INFO  MainVerticle         starting Env
16:12:48 INFO  MainVerticle         Starting discovery
16:12:48 INFO  MainVerticle         Starting deployment
16:12:48 INFO  MainVerticle         API Gateway started PID 1@38d83e4122ec. Listening on port 9130
16:12:48 INFO  MainVerticle         Deploy completed succesfully
16:12:48 INFO  TenantManager        starting supertenant
16:12:48 INFO  TenantManager        handleTimer tenant=supertenant module=null seq1=0
16:12:48 INFO  MainVerticle         fut setHandler
16:12:48 INFO  TenantManager        handleTimer done no=0
Comment by John Malconian [ 18/Jan/20 ]

Root cause: the 'postgres_user' property is ignored by Okapi 2.36.x; only 'postgres_username' is recognized (see the linked OKAPI-798).
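Assuming the property rename is the whole story, the perf job can be made to work on 2.36.x while staying compatible with 2.35.x by passing both property names. A sketch of the adjusted invocation, using the same hosts and credentials as the original failing command:

```shell
# Version-tolerant sketch: pass both the old (postgres_user) and new
# (postgres_username) property names so either Okapi release picks up
# the intended DB user.
docker run --name okapi \
  -e 'JAVA_OPTIONS=-Dokapiurl=http://10.36.10.63:9130 -Dstorage=postgres -Dpostgres_host=10.36.10.225 -Dpostgres_port=5432 -Dpostgres_user=folio -Dpostgres_username=folio -Dpostgres_password=folioadmin -Dpostgres_database=folio' \
  -p9130:9130 folioorg/okapi:2.36.1 cluster
```

Duplicating the property is harmless on versions that recognize only one of the two names, since the unrecognized property is simply ignored.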

Generated at Thu Feb 08 23:20:32 UTC 2024 using Jira 1001.0.0-SNAPSHOT#100246-sha1:7a5c50119eb0633d306e14180817ddef5e80c75d.