This is a Docker setup, so to update, all I did was change the lemmy-ui and lemmy versions in docker-compose.yml. Note: downgrading back to 0.17.4 results in an API error instead, and the site is still broken, so downgrading does not appear to be an option.
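For reference, the upgrade amounted to bumping the image tags, roughly like this (service names and image paths are assumptions based on the stock docker-compose setup, not copied from my file):

```yaml
services:
  lemmy:
    image: dessalines/lemmy:0.18.0
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.0
```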

I upgraded my instance to 0.18.0, and now there are errors in both lemmy-ui and the lemmy backend.

I see federation messages processing as usual; however, trying to load the UI generates an error in lemmy-ui and returns “Server Error” instead of the main page.

Judging by the lemmy-ui logs, it is trying to load the site icon via pictrs from the public-facing domain, but the connection goes to 127.0.1.1:443 instead and is refused.

lemmy-ui log

FetchError: request to https://SITE_URL_REDACTED/pictrs/image/a29da3fc-b6ce-4e59-82b0-1a9c94f8faed.webp failed, reason: connect ECONNREFUSED 127.0.1.1:443
    at ClientRequest.<anonymous> (/app/node_modules/node-fetch/lib/index.js:1505:11)
    at ClientRequest.emit (node:events:511:28)
    at TLSSocket.socketErrorListener (node:_http_client:495:9)
    at TLSSocket.emit (node:events:511:28)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  type: 'system',
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED'
}

lemmy-ui and pictrs are on the same default lemmyinternal network.
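For context, the network wiring looks roughly like this (a sketch assuming the stock compose file; in that layout lemmyinternal is typically declared internal-only, which would explain why a container on it cannot reach the public-facing domain):

```yaml
networks:
  # in the stock compose file this network is internal-only,
  # so containers attached to it have no route to the public internet
  lemmyinternal:
    internal: true

services:
  lemmy-ui:
    networks:
      - lemmyinternal
  pictrs:
    networks:
      - lemmyinternal
```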

lemmy log errors

2023-06-23T21:10:03.153597Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: data did not match any variant of untagged enum AnnouncableActivities
   0: lemmy_apub::activities::community::announce::receive
             at crates/apub/src/activities/community/announce.rs:46
   1: lemmy_server::root_span_builder::HTTP request
           with http.method=POST http.scheme="http" http.host=hakbox.social http.target=/inbox otel.kind="server" request_id=35c58bff-dc83-40f7-b7f0-d885072958ab http.status_code=400 otel.status_code="OK"
             at src/root_span_builder.rs:16
LemmyError { message: None, inner: data did not match any variant of untagged enum AnnouncableActivities, context: SpanTrace [{ target: "lemmy_apub::activities::community::announce", name: "receive", file: "crates/apub/src/activities/community/announce.rs", line: 46 }, { target: "lemmy_server::root_span_builder", name: "HTTP request", fields: "http.method=POST http.scheme=\"http\" http.host=hakbox.social http.target=/inbox otel.kind=\"server\" request_id=35c58bff-dc83-40f7-b7f0-d885072958ab http.status_code=400 otel.status_code=\"OK\"", file: "src/root_span_builder.rs", line: 16 }] }
2023-06-23T21:09:14.740187Z  WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Other errors which are not explicitly handled
   0: lemmy_server::root_span_builder::HTTP request
           with http.method=POST http.scheme="http" http.host=SITE_URL_REDACTED http.target=/inbox otel.kind="server" request_id=83feb464-5402-4d88-b98a-98bc0a76913d http.status_code=400 otel.status_code="OK"
             at src/root_span_builder.rs:16
LemmyError { message: None, inner: Other errors which are not explicitly handled

Caused by:
    Http Signature is expired, checked Date header, checked at Fri, 23 Jun 2023 21:09:14 GMT, expired at Fri, 23 Jun 2023 21:08:14 GMT, context: SpanTrace [{ target: "lemmy_server::root_span_builder", name: "HTTP request", fields: "http.method=POST http.scheme=\"http\" http.host=SITE_URL_REDACTED http.target=/inbox otel.kind=\"server\" request_id=83feb464-5402-4d88-b98a-98bc0a76913d http.status_code=400 otel.status_code=\"OK\"", file: "src/root_span_builder.rs", line: 16 }] }
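The signature-expired warning means the Date header on the incoming request was more than a minute old by the time it was verified, which usually points to clock skew between the two instances. A minimal sketch of that check (the 60-second window is inferred from the timestamps in the log above, not taken from the Lemmy source):

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

# Validity window inferred from the log: "expired at" is exactly
# one minute before the time the check ran.
VALIDITY = timedelta(seconds=60)

def signature_expired(date_header: str, checked_at: datetime) -> bool:
    """Return True if the request's Date header is older than the window."""
    sent = parsedate_to_datetime(date_header)
    return checked_at - sent > VALIDITY

# "checked at" timestamp from the log above:
checked = datetime(2023, 6, 23, 21, 9, 14, tzinfo=timezone.utc)
print(signature_expired("Fri, 23 Jun 2023 21:07:14 GMT", checked))  # sent 2 min earlier -> True
print(signature_expired("Fri, 23 Jun 2023 21:09:00 GMT", checked))  # sent 14 s earlier -> False
```

If the sending server's clock is even a minute or two behind, every request it signs will already be "expired" on arrival, so checking NTP sync on both hosts is a cheap first step.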

I’ve also filed a bug report, because I’ve been trying to troubleshoot this but haven’t found a solution yet.

Any help is appreciated.

  • Slashzero@lemmy.world (OP)

    Note: this seems to have something to do with the database; something got royally messed up after the upgrade.

    After trying all sorts of network hacks and updates, I eventually decided to back up my Postgres container and nuke it.

    With a fresh Postgres DB running alongside 0.18.0, my self-hosted site is back online. Of course, my local post history and all my subs are gone… but at least my site is operational again.

    I’d advise anyone self-hosting to not upgrade to 0.18.0 yet.

  • gnzl@nc.gnzl.cl

    Have you taken a look at the docker-compose configuration from the lemmy-ansible deployment? A proxy container was added to the configuration for the upgrade to 0.18.0 and it may be necessary now.

    • Slashzero@lemmy.world (OP)

      I also added the proxy network to both lemmy-ui and pictrs, but that did not resolve the issue.

      I had to manually remove the site’s icon from the database for the site to start working again.

      I’m going to try removing the proxy network and instead use pictrs:8080 on the internal network, as that feels more secure.

  • Slashzero@hakbox.social

    I ended up nuking my Postgres DB and recreating that container from scratch. Not sure what went wrong, but it feels like some data in the DB broke during the upgrade from 0.17.4 to 0.18.0? 🤷‍♂️

  • jason@lemm.ee

    For me, I had to set url: "http://pictrs:8080/" instead of 127.0.0.1 in the config.hjson file.

    I also had to remove the tls line in the email section to get mine to work. I think this release is a lot more finicky regarding the config file, but its little idiosyncrasies don’t appear to be documented.
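    Roughly, the two changes look like this in config.hjson (field names are assumptions based on the stock example config; your SMTP values will differ):

    ```hjson
    {
      pictrs: {
        # container name on the internal network, not 127.0.0.1
        url: "http://pictrs:8080/"
      }
      email: {
        smtp_server: "postfix:25"
        smtp_from_address: "noreply@example.com"
        # tls line removed here, as described above
      }
    }
    ```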

    That being said, Lemmy is still kind of broken because I can’t even see this thread on my own instance and only see one comment on this one…

    • Slashzero@lemmy.world (OP)

      And yes… I see sync issues as well. I’ve responded to some comments multiple times because my original response wasn’t visible until later.

    • Slashzero@lemmy.world (OP)

      For me, I have to do url: "http://pictrs:8080/" instead of 127.0.0.1 in the config.hjson file

      I checked this, and my lemmy.hjson file already has the host for pictrs set to http://pictrs:8080.

      The only thing that has worked so far is manually unsetting my site’s icon directly in the database.
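      For anyone needing the same workaround, it amounts to nulling the icon on the site row; a sketch, assuming the stock Postgres container and the table/column names from my instance (verify against your own schema before running anything):

      ```sql
      -- run inside the postgres container, e.g.:
      --   docker compose exec postgres psql -U lemmy lemmy
      UPDATE site SET icon = NULL;
      ```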

      • Marduk@hammerdown.0fucks.nl

        After three failed Ansible upgrade attempts yesterday, each causing the lemmy-ui container to restart continually with the error “Input buffer contains unsupported image format” and each time forcing a revert to a prior snapshot, I tried again a while ago, this time removing the site icon before the upgrade.

        The lemmy-ui container is no longer restarting itself, and I am getting new posts and comments, but a large portion of posts don’t show their attached images, and many user icons are missing as well.

        Anyone figure out the missing images issue?