Restarting supabase-auth & supabase-pooler in Digital Ocean local AI

Hi! I’ve recently installed the local AI package following the tutorial. The problem is that supabase-auth and supabase-pooler both show “Restarting”.

supabase-auth log:

goroutine 1 [running]:
net/url.(*URL).Query(0xc00004c017?)
        /usr/local/go/src/net/url/url.go:1159 +0xe
github.com/supabase/auth/cmd.migrate(0xc000023b80?, {0x0?, 0x0?, 0x0?})
        /go/src/github.com/supabase/auth/cmd/migrate_cmd.go:58 +0x227
github.com/supabase/auth/cmd.init.func3(0x1cd4a00, {0x1d12aa0?, 0x4?, 0x127c88b?})
        /go/src/github.com/supabase/auth/cmd/root_cmd.go:20 +0x1d
github.com/spf13/cobra.(*Command).execute(0x1cd4a00, {0xc00003e0b0, 0x0, 0x0})
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:944 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x1cd4a00)
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(0x14672c0?, {0x1467480?, 0xc0001ff440?})
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 +0x47
main.main()
        /go/src/github.com/supabase/auth/main.go:36 +0x11f
{"level":"info","msg":"Go runtime metrics collection started","time":"2025-04-14T21:27:27Z"}
{"level":"info","msg":"received graceful shutdown signal","time":"2025-04-14T21:27:27Z"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x62ab8e]

supabase-pooler log:

Setting RLIMIT_NOFILE to 100000
21:01:32.371 [info] Migrations already up
21:01:32.903 [notice]     :alarm_handler: {:set, {:system_memory_high_watermark, []}}
21:01:32.956 region=local [info] Elixir.Supavisor.SignalHandler is being initialized...
21:01:32.976 region=local [notice] Proxy started transaction on port 6543
21:01:32.980 region=local [notice] Proxy started session on port 5432
21:01:32.986 region=local [notice] Proxy started proxy on port 5412
21:01:32.989 region=local [notice] SYN[nonode@nohost] Adding node to scope <tenants>
21:01:32.993 region=local [notice] SYN[nonode@nohost] Creating tables for scope <tenants>
21:01:32.998 region=local [notice] SYN[nonode@nohost|registry<tenants>] Discovering the cluster
21:01:33.000 region=local [notice] SYN[nonode@nohost|pg<tenants>] Discovering the cluster
21:01:33.001 region=local [notice] SYN[nonode@nohost] Adding node to scope <availability_zone>
21:01:33.001 region=local [notice] SYN[nonode@nohost] Creating tables for scope <availability_zone>
21:01:33.002 region=local [notice] SYN[nonode@nohost|registry<availability_zone>] Discovering the cluster
21:01:33.002 region=local [notice] SYN[nonode@nohost|pg<availability_zone>] Discovering the cluster
21:01:33.002 region=local [warning] metrics_disabled is false
21:01:33.155 region=local [info] Running SupavisorWeb.Endpoint with cowboy 2.13.0 at 0.0.0.0:4000 (http)
21:01:33.160 region=local [info] Access SupavisorWeb.Endpoint at http://localhost:4000
21:01:33.173 region=local [info] [libcluster:postgres] Connected to Postgres database
21:01:33.453 region=local [info] Starting MetricsCleaner
** (ErlangError) Erlang error: {:badarg, {~c"aead.c", 90}, ~c"Unknown cipher or invalid key size"}:

  * 1st argument: Unknown cipher or invalid key size

    (crypto 5.5.2) crypto.erl:1746: :crypto.crypto_one_time_aead(:aes_256_gcm, "your-encryption-key-32-chars-exactly", <<166, 26, 79, 175, 215, 10, 57, 70, 227, 51, 95, 67, 215, 151, 68, 216>>, "LaContraseñaeSG1e2a3r4LabsLaContraseñaeSG1e2a3r4Labs", "AES256GCM", 16, true)
    (cloak 1.1.4) lib/cloak/ciphers/aes_gcm.ex:47: Cloak.Ciphers.AES.GCM.encrypt/2
    (supavisor 2.4.14) deps/cloak_ecto/lib/cloak_ecto/type.ex:42: Supavisor.Encrypted.Binary.dump/1
    (ecto 3.12.5) lib/ecto/type.ex:1048: Ecto.Type.process_dumpers/3
    (ecto 3.12.5) lib/ecto/repo/schema.ex:1110: Ecto.Repo.Schema.dump_field!/6
    (ecto 3.12.5) lib/ecto/repo/schema.ex:1124: anonymous fn/6 in Ecto.Repo.Schema.dump_fields!/5
    (stdlib 6.2) maps.erl:860: :maps.fold_1/4
    nofile:29: (file)
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
[os_mon] memory supervisor port (memsup): Erlang has closed

Hi @sergio.said,

Not sure what’s wrong from just these errors. Maybe provide the whole log of the auth service; the pooler is maybe OK.
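To help narrow it down, the full container log usually shows what happened right before the panic. A minimal sketch, assuming the container is named `supabase-auth` (check `docker ps -a` for the exact name on your setup):

```shell
# Dump the auth container's log and show the lines leading up to the panic
# (container name "supabase-auth" is an assumption; verify with `docker ps -a`)
docker logs supabase-auth 2>&1 | grep -B 5 'panic:' | tail -n 40
```

For what it’s worth, the trace dies inside net/url.(*URL).Query during the migrate step, which often means the database connection string failed to parse, so a malformed Postgres URL in the .env would be my first suspect.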

you can also try and take a look at my install video and see if you missed something or get a hint that helps you: https://youtu.be/lxNKfV6QGuc

Thanks @leex279 ! I’ll see your video.

This is the error that repeats in the log for supabase-auth:

goroutine 1 [running]:
net/url.(*URL).Query(0xc00004c017?)
        /usr/local/go/src/net/url/url.go:1159 +0xe
github.com/supabase/auth/cmd.migrate(0xc0004fdb80?, {0x0?, 0x0?, 0x0?})
        /go/src/github.com/supabase/auth/cmd/migrate_cmd.go:58 +0x227
github.com/supabase/auth/cmd.init.func3(0x1cd4a00, {0x1d12aa0?, 0x4?, 0x127c88b?})
        /go/src/github.com/supabase/auth/cmd/root_cmd.go:20 +0x1d
github.com/spf13/cobra.(*Command).execute(0x1cd4a00, {0xc00003e0b0, 0x0, 0x0})
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:944 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x1cd4a00)
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3a5
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/spf13/cobra.(*Command).ExecuteContext(0x14672c0?, {0x1467480?, 0xc0000c3440?})
        /go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:985 +0x47
main.main()
        /go/src/github.com/supabase/auth/main.go:36 +0x11f
{"level":"info","msg":"Go runtime metrics collection started","time":"2025-04-14T21:55:32Z"}
{"level":"info","msg":"received graceful shutdown signal","time":"2025-04-14T21:55:32Z"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x60 pc=0x62ab8e]


Make sure your .env file is correctly set up.
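The pooler’s stack trace actually hints at the specific .env problem: :aes_256_gcm requires a key of exactly 32 bytes, and the placeholder in the trace (“your-encryption-key-32-chars-exactly”) is 36 bytes long, which is why :crypto.crypto_one_time_aead raises “Unknown cipher or invalid key size”. A quick way to sanity-check the key length (the variable name for it in your .env may differ; this just measures the strings):

```shell
# The placeholder from the pooler trace is 36 bytes -- too long for AES-256-GCM
printf '%s' 'your-encryption-key-32-chars-exactly' | wc -c   # prints 36

# A 32-character ASCII string is exactly 32 bytes, which the cipher accepts
printf '%s' '0123456789abcdef0123456789abcdef' | wc -c       # prints 32
```

Replacing the placeholder with any 32-byte value (for example the output of `openssl rand -hex 16`) should get the pooler past the Cloak encrypt call.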

Which OS are you on? I haven’t seen this error before. But also check the troubleshooting section of the local AI package, because I do cover a couple of solutions for containers that are constantly restarting!

Hi Cole! Thanks for reaching out. I’m using Docker on Ubuntu 22.04, as specified in the tutorial. OK, I’ll look into the troubleshooting section.


Sounds good! I hope that helps