kube-cascade/keycloak

I recently broke this by accidentally deleting the database and restarting it.

Anyway, I had a backup but went ahead and rolled forward to test the terraform-a-new-keycloak idea... and it worked, I think!

So I had a blank keycloak sitting in kubernetes based on the manifests here.

I then moved the tfstate away and reterraformed the oidc clients and ldap configs back into existence.

HOWEVER: the newly generated OIDC secrets will be different, so clients will fail to authenticate until the old values are back. To unscrew this, the secrets must be restored, which is most easily done by restoring the `random_password` values from the previous state.
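For reference, this is roughly how to enumerate those resources in an old state file. The sketch below runs against a made-up two-resource sample (the module and resource names are hypothetical), not the real state:

```shell
# Hypothetical miniature tfstate -- just enough structure to demonstrate the query.
cat > /tmp/kc-old-state.json <<'EOF'
{
  "resources": [
    {
      "module": "module.oidc",
      "type": "random_password",
      "name": "client_secret",
      "instances": [
        { "attributes": { "result": "s3cret-from-backup" } }
      ]
    },
    {
      "module": "module.oidc",
      "type": "keycloak_openid_client",
      "name": "grafana",
      "instances": [ { "attributes": {} } ]
    }
  ]
}
EOF

# List each random_password address alongside the password value it holds.
jq -r '
  .resources[]
  | select(.type == "random_password")
  | "\(.module).\(.type).\(.name) = \(.instances[0].attributes.result)"
' /tmp/kc-old-state.json
# -> module.oidc.random_password.client_secret = s3cret-from-backup
```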

First, we'll remove the freshly generated secrets from the new state. Then we'll import the old values.

```shell
jq -c '.resources[]' terraform.tfstate.1681525339.backup | \
  jq -r '
    select(.type == "random_password")
    | @sh "terraform state rm \(.module).\(.type).\(.name)\"[0]\""
  ' | sh -s
```
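Before letting the generated commands loose, it's worth dropping the trailing `| sh -s` and eyeballing what the pipeline emits. A sketch against a tiny made-up state (hypothetical names, same shape as a real tfstate):

```shell
# Hypothetical one-resource state, only for demonstrating the generator.
cat > /tmp/kc-rm-demo.json <<'EOF'
{"resources":[{"module":"module.oidc","type":"random_password","name":"client_secret",
  "instances":[{"attributes":{"result":"s3cret"}}]}]}
EOF

# Same pipeline as above, minus the final "| sh -s": print, don't execute.
jq -c '.resources[]' /tmp/kc-rm-demo.json | \
  jq -r '
    select(.type == "random_password")
    | @sh "terraform state rm \(.module).\(.type).\(.name)\"[0]\""
  '
# -> terraform state rm 'module.oidc'.'random_password'.'client_secret'"[0]"
```

When `sh` runs that line, the adjacent quoted chunks join into a single argument, `module.oidc.random_password.client_secret[0]`, which is why the odd-looking `\"[0]\"` escaping works.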

(1681525339, the epoch timestamp in that backup's filename, is Friday, April 14, 2023 at 21:22:19 CDT, which is when I screwed my system up.)

Now we'll restore the good secrets.

```shell
jq -c '.resources[]' terraform.tfstate.1681525339.backup | \
  jq -r '
    select(.type == "random_password")
    | @sh "terraform import \(.module).\(.type).\(.name)\"[0]\" \(.instances[0].attributes.result)"
  ' | sh -s
```
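The import generator can be dry-run the same way; the only difference from the state-rm pipeline is the extra @sh-quoted password argument on the end. Again, a sketch with a made-up one-resource state:

```shell
# Hypothetical state sample for the dry run.
cat > /tmp/kc-import-demo.json <<'EOF'
{"resources":[{"module":"module.oidc","type":"random_password","name":"client_secret",
  "instances":[{"attributes":{"result":"s3cret"}}]}]}
EOF

# Print the generated import commands instead of piping them to sh.
jq -c '.resources[]' /tmp/kc-import-demo.json | \
  jq -r '
    select(.type == "random_password")
    | @sh "terraform import \(.module).\(.type).\(.name)\"[0]\" \(.instances[0].attributes.result)"
  '
# -> terraform import 'module.oidc'.'random_password'.'client_secret'"[0]" 's3cret'
```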

At least, I think this worked... I also had to set the epoch to 1 for all of these by hand-editing the state file.
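If "epoch" means the per-instance `schema_version` field in the state JSON (an assumption on the editor's part; check your own state file), that hand edit can be scripted with jq instead. A sketch against a made-up sample; a real run would target `terraform.tfstate` after backing it up:

```shell
# Made-up sample state -- stand-in for terraform.tfstate (back up the real one first!).
cat > /tmp/kc-epoch-demo.json <<'EOF'
{"resources":[{"module":"module.oidc","type":"random_password","name":"client_secret",
  "instances":[{"schema_version":0,"attributes":{"result":"s3cret"}}]}]}
EOF

# Set schema_version = 1 on every random_password instance.
jq '.resources |= map(
      if .type == "random_password"
      then .instances |= map(.schema_version = 1)
      else . end
    )' /tmp/kc-epoch-demo.json
```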