`roomuserid_joined` cf seems unreliable, so in the meantime we need to check
membership state (or maybe this is a more reliable check anyway)
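a minimal sketch of the membership-state check, with hypothetical
service and method names (the actual service layout may differ):

    use ruma::{RoomId, UserId, events::room::member::MembershipState};

    // read the current m.room.member state event for this user
    // instead of trusting the roomuserid_joined cf
    fn is_joined(services: &Services, room_id: &RoomId, user_id: &UserId) -> bool {
        services
            .rooms
            .state_accessor
            .get_member(room_id, user_id)
            .ok()
            .flatten()
            .map_or(false, |member| member.membership == MembershipState::Join)
    }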
Signed-off-by: strawberry <strawberry@puppygock.gay>
this `real_users_cache` seems weird, and i have no idea what
prompted its creation upstream. perhaps they did this because
sqlite was very slow and their rocksdb setup is very poor, so
a "solution" was to stick member counts in memory.
slow iterators, scanning, etc. do not apply to conduwuit, where
our rocksdb is extremely tuned, and i seriously doubt something
like this would have any real-world net-positive performance impact.
also for some reason, there is suspicious logic where we
overwrite the entire push target collection.
both of these things are potential causes of receiving
notifications in rooms we've left.
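the overwrite, roughly paraphrased (not the literal upstream code):

    // upstream replaced the accumulated push targets wholesale:
    // push_target = get_our_real_users(&pdu.room_id)?;
    // rather than extending the existing set:
    for user in get_our_real_users(&pdu.room_id)?.iter() {
        push_target.insert(user.clone());
    }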
Signed-off-by: strawberry <strawberry@puppygock.gay>
only return `inline` if the detected content-type is an allowed
inline content-type as defined by MSC2702
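a sketch of the idea; the allowlist here is abridged, the full list
is in MSC2702:

    // content-types MSC2702 allows to be served inline (abridged)
    const ALLOWED_INLINE: &[&str] = &[
        "text/plain", "text/css", "application/json",
        "image/jpeg", "image/gif", "image/png", "image/webp",
        "video/mp4", "video/webm", "audio/mpeg", "audio/ogg",
        // ...remainder of the MSC2702 list
    ];

    fn content_disposition(detected_content_type: &str) -> &'static str {
        if ALLOWED_INLINE.contains(&detected_content_type) {
            "inline"
        } else {
            "attachment"
        }
    }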
Signed-off-by: strawberry <strawberry@puppygock.gay>
so in theory: guest users, users peeking over federation,
and users in world-readable rooms should be allowed to send
read receipts even if they're not joined.
relaxing this check to only allow the read receipt if
the server has at least 1 member in the room keeps
some of this working
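a sketch of the relaxed check, with hypothetical names:

    // instead of requiring the sender to be joined, only require
    // that this server knows of at least one member in the room
    let server_in_room = services
        .rooms
        .state_cache
        .room_joined_count(&room_id)?
        .unwrap_or(0)
        > 0;
    if !server_in_room {
        return Err(Error::bad_request("server has no members in this room"));
    }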
Signed-off-by: strawberry <strawberry@puppygock.gay>
this is dual-stack by default on linux, and resolves
issues with nginx resolving `localhost` and randomly
choosing between 127.0.0.1 and [::1], causing
intermittent upstream issues
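a sketch of binding both loopback addresses (the port and setup
here are illustrative):

    use std::net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr};

    // listen on both loopbacks so a reverse proxy that resolves
    // `localhost` can reach us over either address family
    let port = 6167;
    for ip in [IpAddr::V4(Ipv4Addr::LOCALHOST), IpAddr::V6(Ipv6Addr::LOCALHOST)] {
        let listener = tokio::net::TcpListener::bind(SocketAddr::new(ip, port)).await?;
        // ...hand each listener to the axum server
    }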
Signed-off-by: strawberry <strawberry@puppygock.gay>
for a very long time, if a remote server responded to us with
a valid but unsuccessful (HTTP 4xx) response and the caller was the
`send_federation_request` function, we could find ourselves
with a warning message containing only the destination's
server name, which was very unhelpful. the true error was
buried away in trace logs. this was primarily noticeable
with server key fetch requests from us.
conduit has been throwing away the ruma request error: https://gitlab.com/famedly/conduit/-/blame/next/src/utils/error.rs#L62
before: 2024-05-23T04:45:02.930224Z WARN router:{path=/_matrix/client/v3/publicRooms}:handle: conduit_api::client_server::directory: Failed to return our /publicRooms: matrix.org
after: 2024-05-23T05:05:02.435272Z WARN router:{path=/_matrix/client/v3/publicRooms}:handle: conduit_api::client_server::directory: Failed to return our /publicRooms: matrix.org: [401 / M_UNAUTHORIZED] Failed to find any key to satisfy: _FetchKeyRequest(server_name='your.server.name', minimum_valid_until_ts=1716440702337, key_ids=['ed25519:RQB3XPQX'])
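a sketch of the fix, with hypothetical type names; the point is that
Display now carries the underlying ruma error alongside the
destination:

    // hypothetical error type, not the actual one in the codebase
    #[derive(Debug, thiserror::Error)]
    #[error("{destination}: {source}")]
    pub struct FederationError {
        pub destination: ruma::OwnedServerName,
        #[source]
        pub source: ruma::api::client::Error,
    }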
Signed-off-by: strawberry <strawberry@puppygock.gay>
this matrix-react-sdk PR (and the cited sliding sync MSC)
says that they intend to check for sliding sync support
via this unstable feature flag at /versions until the CORS
header stuff is specced:
https://github.com/matrix-org/matrix-react-sdk/pull/12498
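a sketch of advertising this in the /versions response (the exact
flag name here is an assumption based on the MSC number):

    use std::collections::BTreeMap;

    let mut unstable_features = BTreeMap::new();
    unstable_features.insert("org.matrix.msc3575".to_owned(), true);
    // ...attach unstable_features to the get_supported_versions response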
Signed-off-by: strawberry <strawberry@puppygock.gay>
Previously, we were returning redundant member count updates or encrypted
device updates from the /sync endpoint in some cases. The extra member
count updates are spec-compliant, but unnecessary, while the extra
encrypted device updates violate the spec.
The refactor necessary to fix this bug is also necessary to support
filtering on state events in sync.
Details:
Joined room incremental sync needs to examine state events for four
purposes:
1. determining whether we need to return an update to room member counts
2. determining the set of left/joined devices for encrypted rooms
(returned in `device_lists`)
3. returning state events to the client (in `rooms.joined.*.state`)
4. tracking which member events we have sent to the client, so they can
be omitted on future requests when lazy-loading is enabled.
The state events that we need to examine for the first two cases are the
member events in the delta between `since` and the end of `timeline`. For
the last two cases, we need the delta between `since` and the start of
`timeline`, plus contextual member events for any senders that occur in
`timeline`. The second list is subject to filtering, while the first is
not. A sketch of the distinction follows.
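The sketch, with hypothetical helper names:

    // cases 1/2: member events between `since` and the END of
    // `timeline`; never filtered, feeds member counts and `device_lists`
    let membership_delta = member_state_delta(room_id, since, timeline_end)?;

    // cases 3/4: state between `since` and the START of `timeline`,
    // plus member events for senders seen in `timeline` (lazy loading);
    // only this set is subject to the client's filter
    let client_state: Vec<_> = state_delta(room_id, since, timeline_start)?
        .into_iter()
        .chain(timeline_sender_member_events(room_id, &timeline)?)
        .filter(|event| filter.matches(event))
        .collect();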
Before this change, we were using the same set of state events that we are
returning to the client (cases 3/4) to do the analysis for cases 1/2.
In a compliant implementation, this would result in us missing some
relevant member events in 1/2 in addition to seeing redundant member
events. In current conduwuit this is not the case because the set of
events that we return to the client is always a superset of the set that
is needed for cases 1/2. This is because we don't support filtering, and
we have an existing bug[1] where we are returning the delta between
`since` and the end of `timeline` rather than the start.
[1]: https://github.com/girlbossceo/conduwuit/issues/361
Fixing this is necessary to implement filtering because otherwise
we would start missing some member events for member count or encrypted
device updates if the relevant member events are rejected by the filter.
This would be much worse than our current behavior.
This cache can serve invalid responses, and has an extremely low hit
rate.
It serves invalid responses because it's only keyed off
the `since` parameter, but many of the other request parameters also
affect the response or its side effects. This will become worse once we
implement filtering, because there will be a wider space of parameters
with different responses. This problem is fixable, but not worth it
because of the low hit rate.
The low hit rate is because normal clients will always issue the next
sync request with `since` set to the `prev_batch` value of the previous
response. The only time we expect to see multiple requests with the same
`since` is when the response is empty, but we don't cache empty
responses.
This was confirmed experimentally by logging cache hits and misses over
15 minutes with a wide variety of clients. This test was run on
matrix.computer.surgery, which has only a few active users, but a
large volume of sync traffic from many rooms. Over the test period, we
had 3 hits and 5309 misses. All hits occurred in the first minute, so I
suspect that they had something to do with client recovery from an
offline state. The clients that were connected during the test are:
- element web
- schildichat web
- iamb
- gomuks
- nheko
- fractal
- fluffychat web
- fluffychat android
- cinny web
- element android
- element X android
Fixes: #336
the namespace check on username login is unnecessary; the hashes aren't
ever going to match, and axum auth handles this kind of thing already
Signed-off-by: strawberry <strawberry@puppygock.gay>
This reverts commit 321a6ca0fe.
These checks were not working as intended, resulting in the unban button not working.
The join check is kept since it slightly reduces the number of sent joins in some cases.
This check will probably be replaced soon with a more universal solution to the "made no change" issue.
Signed-off-by: morguldir <morguldir@protonmail.com>
Only normal users should be prevented from creating an alias within an
exclusive namespace, not the appservice itself. This mirrors the
behaviour in api/client_server/room.rs on room creation.
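A sketch mirroring that behaviour, with hypothetical names:

    if let Some(appservice) = &appservice_info {
        // the appservice itself may only claim aliases inside its
        // own namespaces
        if !appservice.aliases.is_match(room_alias.as_str()) {
            return Err(Error::bad_request("alias is not in appservice namespace"));
        }
    } else if services.appservice.is_exclusive_alias(&room_alias).await {
        // a normal user may not claim an alias inside an exclusive
        // appservice namespace
        return Err(Error::bad_request("alias is in an exclusive appservice namespace"));
    }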
this makes `CONDUWUIT_WELL_KNOWN__CLIENT` a valid env variable config
option, matching what would normally exist under `[well_known.client]` in toml
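a sketch of how the mapping works with figment (assuming the config
is loaded roughly like this): the prefix is stripped and `__` becomes
the toml path separator:

    use figment::{Figment, providers::{Env, Format, Toml}};

    // `CONDUWUIT_WELL_KNOWN__CLIENT` -> `well_known.client`
    let raw_config = Figment::new()
        .merge(Toml::file("conduwuit.toml"))
        .merge(Env::prefixed("CONDUWUIT_").split("__"));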
Signed-off-by: strawberry <strawberry@puppygock.gay>
after getting the shared rooms with the target user, we were actually
only getting our own presence instead of the requested user's
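a sketch of the fix, with hypothetical names:

    // before (buggy): queried our own presence
    // let presence = services.presence.get_presence(sender_user)?;

    // after: query the presence of the user we were asked about
    let presence = services.presence.get_presence(&target_user)?;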
Signed-off-by: strawberry <strawberry@puppygock.gay>