Compare commits

1283 commits

Author SHA1 Message Date
Timo Kösters
3c93c81204 Merge branch 'docs' into 'next'
fix: config options for well_known have changed

Closes #474

See merge request famedly/conduit!723
2024-10-07 00:00:34 +00:00
Timo Kösters
6767ca8bc8
fix: config options for well_known have changed 2024-10-07 00:03:35 +02:00
Timo Kösters
f8d7ef04e6 Merge branch 'bump' into 'next'
Bump version

See merge request famedly/conduit!722
2024-10-06 14:17:22 +00:00
Timo Kösters
892fb8846a
Bump version 2024-10-06 14:18:54 +02:00
Timo Kösters
bca8d1f70f Merge branch 'mediafixes' into 'next'
fix: old media used spaces in content disposition without quotes

See merge request famedly/conduit!717
2024-09-25 11:53:05 +00:00
Timo Kösters
65fe6b0ab5
fix: Empty content dispositions could create problems 2024-09-25 09:06:43 +02:00
Timo Kösters
fea85b0894
fix: Migration typo for media 2024-09-24 23:07:19 +02:00
Timo Kösters
a7405cddc0
fix: Matrix media repo 2024-09-24 19:43:26 +02:00
Timo Kösters
3df21e8257
fix: old media used spaces in content disposition without quotes 2024-09-24 16:46:32 +02:00
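Context for the media fixes above: a filename containing spaces is only valid in a `Content-Disposition` header if it is wrapped in double quotes (with inner quotes and backslashes escaped). A minimal sketch of that rule follows; it is illustrative only, not the Conduit implementation, which now builds the header through ruma's `ContentDisposition` type (see famedly/conduit!713 further down this list).

```rust
// Illustrative sketch: a filename with spaces must be quoted (and inner
// quotes/backslashes escaped) when placed in a Content-Disposition header.
fn content_disposition(filename: &str) -> String {
    let escaped = filename.replace('\\', "\\\\").replace('"', "\\\"");
    format!("inline; filename=\"{escaped}\"")
}

fn main() {
    assert_eq!(
        content_disposition("my holiday photo.png"),
        r#"inline; filename="my holiday photo.png""#
    );
}
```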
Timo Kösters
e4d6202840 Merge branch 'config-env-var-split' into 'next'
feat(config): split on __, allowing for setting individual values in a table

See merge request famedly/conduit!706
2024-09-21 14:51:39 +00:00
Timo Kösters
c4810a3a08 Merge branch 'well-know-fix' into 'next'
Fix parsing of CONFIG_WELL_KNOW_* env variables

See merge request famedly/conduit!718
2024-09-21 14:50:25 +00:00
Timo Kösters
73d0536cd3 Merge branch 'braid/msc3916-verson-response' into 'next'
fix: add missing msc3916 unstable feature in version response

Closes #473

See merge request famedly/conduit!720
2024-09-21 14:49:16 +00:00
The one with the braid
a6797ca0a2 fix: add missing msc3916 unstable feature in version response
Fixes: #473

Signed-off-by: The one with the braid <info@braid.business>
2024-09-21 10:54:01 +02:00
Leonardo José
cdd03dfec0 Fix parsing of CONFIG_WELL_KNOW_* env variables into the internal configuration object 2024-09-12 03:33:49 +00:00
Matthias Ahouansou
2bab8869d0 Merge branch 'authenticated-media' into 'next'
feat: authenticated media

See merge request famedly/conduit!716
2024-08-28 10:59:41 +00:00
Matthias Ahouansou
cbd3b07ca7
feat(media): use authenticated endpoints when fetching remote media 2024-08-28 11:25:28 +01:00
Matthias Ahouansou
27d6d94355
feat: add support for authenticated media requests 2024-08-28 11:25:28 +01:00
Matthias Ahouansou
a3716a7d5a
chore: upgrade request client matrix version
this is needed so that new endpoints use stable paths
2024-08-28 11:25:28 +01:00
Matthias Ahouansou
a9c3867287 Merge branch 'content-disposition-type' into 'next'
chore: Use Content-Disposition type for HTTP header

Closes #463

See merge request famedly/conduit!713
2024-08-22 18:36:25 +00:00
avdb13
423b0928d5
use ruma content disposition type in place of string
Co-Authored-By: Matthias Ahouansou <matthias@ahouansou.cz>
2024-08-22 19:03:32 +01:00
Matthias Ahouansou
44dd21f432 Merge branch 'TheDidek1-next-patch-00204' into 'next'
Update docs to point at new Synapse location

See merge request famedly/conduit!715
2024-07-19 08:58:05 +00:00
Dawid Rejowski
75a0f68349 Update docs to point at new Synapse location 2024-07-18 19:58:27 +00:00
Matthias Ahouansou
8abab8c8a0 Merge branch 'Crftbt-next-patch-43247' into 'next'
Update docker.md specifying port so that others don't also run into trying to...

See merge request famedly/conduit!712
2024-07-10 08:36:34 +00:00
Craft
324e1beabf Update docker.md specifying port so that others don't also run into trying to figure this out when following this md. 2024-07-09 21:49:55 +00:00
Matthias Ahouansou
00c9ef7b56 Merge branch 'send-invalid-pdu-correction' into 'next'
fix: don't fail the entire request if any PDU's format is invalid

See merge request famedly/conduit!709
2024-07-07 21:13:38 +00:00
Matthias Ahouansou
6455e918be
fix: don't always assume ruma can generate reference hashes 2024-07-07 21:40:56 +01:00
Matthias Ahouansou
ea3e7045b4
fix: don't fail the entire transaction if any PDU's format is invalid 2024-07-07 21:40:37 +01:00
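For context on the two fixes above: a federation `/send` transaction carries many PDUs, and the response reports a result per event, so a single malformed PDU should only produce an error entry for that event instead of failing the whole request. A rough sketch of that per-PDU handling, with hypothetical types and helper names:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-ins for the real PDU handling types.
struct Pdu {
    event_id: String,
    valid_format: bool,
}

fn handle_pdu(pdu: &Pdu) -> Result<(), String> {
    if !pdu.valid_format {
        return Err("invalid PDU format".to_owned());
    }
    Ok(())
}

// Every PDU gets its own entry in the response map; a bad event is recorded
// as an error instead of aborting the whole transaction.
fn handle_transaction(pdus: &[Pdu]) -> BTreeMap<String, Result<(), String>> {
    pdus.iter()
        .map(|pdu| (pdu.event_id.clone(), handle_pdu(pdu)))
        .collect()
}

fn main() {
    let pdus = [
        Pdu { event_id: "$good".to_owned(), valid_format: true },
        Pdu { event_id: "$bad".to_owned(), valid_format: false },
    ];
    let results = handle_transaction(&pdus);
    assert!(results["$good"].is_ok());
    assert!(results["$bad"].is_err());
}
```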
Matthias Ahouansou
b8a1b4fee5 Merge branch 'remove-tls-override-when-no-srv-response' into 'next'
fix: remove TLS name override when no SRV record is present (but properly)

See merge request famedly/conduit!708
2024-07-06 16:49:38 +00:00
Matthias Ahouansou
d95345377b
fix: remove TLS name override when no SRV record is present (but properly)
The previous attempt only did so when no IP could be resolved, which isn't enough
2024-07-06 17:31:31 +01:00
Matthias Ahouansou
75322af8c7 Merge branch 'remove-tls-override-when-no-srv-response' into 'next'
fix: remove TLS name override when no SRV record is present

See merge request famedly/conduit!707
2024-07-06 16:24:32 +00:00
Matthias Ahouansou
11187b3fad
fix: remove TLS name override when no SRV record is present
this could have been an issue in cases where there was previously a SRV record, but later got removed
2024-07-06 17:06:11 +01:00
Matthias Ahouansou
35ed731a46
feat(config): split on __, allowing for setting individual values in a table 2024-07-02 15:39:45 +01:00
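The `__` split referenced above lets a single environment variable address a value nested inside a config table. A small sketch of the key-splitting idea; the `CONDUIT_` prefix and helper are assumptions for illustration, and the actual implementation lives in Conduit's config loader:

```rust
// Illustrative only: how an env var key can address a nested config table
// by splitting on "__" (e.g. CONDUIT_WELL_KNOWN__CLIENT -> [well_known].client).
fn nested_key(env_key: &str) -> Option<Vec<String>> {
    let rest = env_key.strip_prefix("CONDUIT_")?; // prefix is an assumption here
    Some(
        rest.split("__")
            .map(|part| part.to_ascii_lowercase())
            .collect(),
    )
}

fn main() {
    // "CONDUIT_WELL_KNOWN__CLIENT" addresses `client` inside the `well_known` table.
    assert_eq!(
        nested_key("CONDUIT_WELL_KNOWN__CLIENT"),
        Some(vec!["well_known".to_owned(), "client".to_owned()])
    );
}
```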
Matthias Ahouansou
1f313c6807 Merge branch 'finite-servername-cache' into 'next'
fix: don't cache server name lookups indefinitely

See merge request famedly/conduit!702
2024-07-01 09:52:18 +00:00
Matthias Ahouansou
e70d27af98 Merge branch 'timestamped-messaging' into 'next'
feat(appservice): support timestamped messaging

See merge request famedly/conduit!703
2024-07-01 09:36:14 +00:00
Matthias Ahouansou
ba8429cafe
fix: don't cache server name lookups indefinitely 2024-07-01 10:17:01 +01:00
Matthias Ahouansou
7a4d0f6fe8 Merge branch 'acl-dont-have-empty-exception' into 'next'
fix: don't ignore ACLs when there is no content

See merge request famedly/conduit!705
2024-06-26 21:41:42 +00:00
Matthias Ahouansou
2f45a907f9
fix: don't ignore ACLs when there is no content
despite this being very bad behavior, it is required by the spec
2024-06-26 22:06:46 +01:00
Matthias Ahouansou
de0deda179 Merge branch 'bump-ruma' into 'next'
chore: bump ruma

Closes #447

See merge request famedly/conduit!704
2024-06-25 09:43:15 +00:00
Matthias Ahouansou
62f1da053f
feat(appservice): support timestamped messaging 2024-06-25 10:25:58 +01:00
Matthias Ahouansou
602c56cae9
chore: bump ruma 2024-06-25 10:10:53 +01:00
Matthias Ahouansou
4b9520b5ad Merge branch 'bump-rust' into 'next'
chore: bump rust to 1.79.0 and apply new lints

See merge request famedly/conduit!700
2024-06-21 07:54:00 +00:00
Matthias Ahouansou
9014e43ce1
chore: bump rust to 1.79.0 and apply new lints 2024-06-21 08:29:33 +01:00
Matthias Ahouansou
ffc57f8997 Merge branch 'nightly-rustfmt' into 'next'
ci: use nightly rustfmt

See merge request famedly/conduit!699
2024-06-16 16:44:51 +00:00
Matthias Ahouansou
fd19dda5cb
ci: use nightly rustfmt
we were using this before, but it broke when refactoring the flake out into separate files
2024-06-16 17:28:05 +01:00
Matthias Ahouansou
dc0fa09a57 Merge branch 'bump' into 'next'
chore: bump version to 0.9.0-alpha

See merge request famedly/conduit!698
2024-06-14 12:02:56 +00:00
Matthias Ahouansou
ba1138aaa3
chore: bump version to 0.9.0-alpha 2024-06-14 12:33:40 +01:00
Matthias Ahouansou
6398136163 Merge branch 'debian-aarch64' into 'next'
ci: build for Debian aarch64

See merge request famedly/conduit!692
2024-06-14 11:10:59 +00:00
Matthias Ahouansou
16af8b58ae
ci: build for Debian aarch64 2024-06-13 09:32:09 +01:00
Timo Kösters
7a5b893013
Bump version 2024-06-12 19:43:18 +02:00
Matthias Ahouansou
c453d45598
fix(keys): only use keys valid at the time of PDU or transaction, and actually refresh keys
Previously, we only fetched keys once, only requesting them again if we have any missing, allowing for ancient keys to be used to sign PDUs and transactions
Now we refresh keys that either have or are about to expire, preventing attacks that make use of leaked private keys of a homeserver
We also ensure that when validating PDUs or transactions, they are valid at the origin_server_ts or the time we received the transaction, respectively
So as not to break event authorization for old rooms, we need to keep old keys around
We move verify_keys that we no longer see in direct requests to the origin into old_verify_keys
We keep old_verify_keys indefinitely, as mentioned above, so as not to break event authorization (at least until a future MSC addresses this)
2024-06-12 19:41:43 +02:00
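A compact sketch of the validity rule described in the commit message above: a signing key may only be used for an event if it was still valid at the event's `origin_server_ts` (or at the time the transaction was received), and keys that disappear from fresh key responses are moved into `old_verify_keys` rather than dropped. Field and function names here are illustrative, not Conduit's:

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for server signing-key metadata.
struct VerifyKey {
    valid_until_ts: u64, // milliseconds since the UNIX epoch
}

struct ServerKeys {
    verify_keys: BTreeMap<String, VerifyKey>,
    old_verify_keys: BTreeMap<String, VerifyKey>,
}

/// A key may be used for an event only if it was still valid at the
/// event's origin_server_ts (or when the transaction was received).
fn key_usable_at(keys: &ServerKeys, key_id: &str, timestamp_ms: u64) -> bool {
    keys.verify_keys
        .get(key_id)
        .or_else(|| keys.old_verify_keys.get(key_id))
        .map_or(false, |key| timestamp_ms <= key.valid_until_ts)
}

fn main() {
    let mut keys = ServerKeys {
        verify_keys: BTreeMap::new(),
        old_verify_keys: BTreeMap::new(),
    };
    keys.old_verify_keys
        .insert("ed25519:old".to_owned(), VerifyKey { valid_until_ts: 1_600_000_000_000 });
    // Usable for an old event, but not for one claiming a later timestamp.
    assert!(key_usable_at(&keys, "ed25519:old", 1_500_000_000_000));
    assert!(!key_usable_at(&keys, "ed25519:old", 1_700_000_000_000));
}
```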
Matthias Ahouansou
144d548ef7
fix: permission checks for aliases 2024-06-12 19:41:31 +02:00
Benjamin Lee
7b259272ce
fix: do not return redacted events from search 2024-06-12 19:41:02 +02:00
Matthias Ahouansou
48c1f3bdba
fix: userid checks for incoming EDUs 2024-06-12 19:39:27 +02:00
Timo Kösters
dd19877528 Merge branch 'bump-ruma' into 'next'
chore: bump all dependencies

See merge request famedly/conduit!627
2024-06-11 20:59:58 +00:00
Matthias Ahouansou
ba2a5a6115
chore: bump all dependencies 2024-06-11 20:35:56 +01:00
Matthias Ahouansou
a36ccff06a Merge branch 'security-readme' into 'next'
docs: add security disclosure instructions

See merge request famedly/conduit!691
2024-06-06 21:21:07 +00:00
Matthias Ahouansou
39b4932725
docs: add security disclosure instructions 2024-06-06 21:48:45 +01:00
Matthias Ahouansou
c45e52f45a Merge branch 'media-csp' into 'next'
fix(media): use csp instead of modifying content-type

See merge request famedly/conduit!689
2024-06-04 05:31:35 +00:00
Matthias Ahouansou
1dbb3433e0
fix(media): use csp instead of modifying content-type 2024-06-03 21:40:25 +01:00
Matthias Ahouansou
efecb78888 Merge branch 'local-event-non-restricted-room-vers' into 'next'
fix(membership): fallback to locally signed event if the join wasn't a restricted one on send_join response

See merge request famedly/conduit!680
2024-06-03 13:28:41 +00:00
Matthias Ahouansou
f25a0b49eb Merge branch 'recurse-relationships' into 'next'
feat: recurse relationships

See merge request famedly/conduit!613
2024-06-03 13:19:16 +00:00
Matthias Ahouansou
b46000fadc
feat: recurse relationships 2024-06-03 13:42:52 +01:00
Matthias Ahouansou
7b19618136 Merge branch 'server-user-globals' into 'next'
refactor: add server_user to globals

See merge request famedly/conduit!686
2024-05-31 21:27:26 +00:00
Matthias Ahouansou
19154a9f70
refactor: add server_user to globals 2024-05-31 21:56:11 +01:00
Matthias Ahouansou
ec8dfc283c
fix(membership): fallback to locally signed event if the join wasn't a restricted one on send_join response 2024-05-31 16:37:06 +01:00
Matthias Ahouansou
be1b8b68a7 Merge branch 'remove-alias-command' into 'next'
feat(admin): remove alias command

See merge request famedly/conduit!685
2024-05-29 17:05:45 +00:00
Matthias Ahouansou
6c2eb4c786
feat(admin): remove alias command 2024-05-29 17:49:51 +01:00
Matthias Ahouansou
3df791e030 Merge branch 'ruma-server-util' into 'next'
refactor: let ruma-server-util handle X-Matrix parsing

See merge request famedly/conduit!684
2024-05-29 13:16:08 +00:00
Matthias Ahouansou
9374b74e77
refactor: let ruma-server-util handle X-Matrix parsing 2024-05-29 12:27:37 +01:00
Matthias Ahouansou
c732c7c97f Merge branch 'toggle_allow_register' into 'next'
add command to set the allow registration status

See merge request famedly/conduit!477
2024-05-29 09:08:59 +00:00
Matthias Ahouansou
33c9da75ec Merge branch 'clarify-3pids-are-unsupported' into 'next'
fix: clarify that 3pids are currently unsupported

See merge request famedly/conduit!683
2024-05-29 08:52:59 +00:00
Matthias Ahouansou
59d7674b2a
fix: clarify that 3pids are currently unsupported 2024-05-29 09:36:35 +01:00
tony
6bcc2f80b8
add command to set the allow registration status
Co-Authored-By: Matthias Ahouansou <matthias@ahouansou.cz>
2024-05-29 09:25:08 +01:00
Matthias Ahouansou
817f382c5f Merge branch 'openid-api' into 'next'
feat: support OpenID endpoints

Closes #453

See merge request famedly/conduit!681
2024-05-28 15:11:03 +00:00
mikoto
a888c7cb16
OpenID routes
Co-Authored-By: Matthias Ahouansou <matthias@ahouansou.cz>
2024-05-28 15:39:19 +01:00
Timo Kösters
47aadcea1d Merge branch 'membership-reason-fixes' into 'next'
fix(membership): always set reason & allow new events if reason changed

Closes #452

See merge request famedly/conduit!669
2024-05-26 07:22:29 +00:00
Matthias Ahouansou
9b8ec21e6e Merge branch 'admin-faq' into 'next'
docs(faq): add instructions on how to make a user admin

See merge request famedly/conduit!677
2024-05-14 20:37:21 +00:00
Matthias Ahouansou
e51f60e437
docs(faq): add instructions on how to make a user admin 2024-05-14 21:20:16 +01:00
Matthias Ahouansou
11990e7524 Merge branch 'admin-hash-sign' into 'next'
feat(admin): add hash-and-sign-event command

See merge request famedly/conduit!670
2024-05-09 16:19:40 +00:00
Matthias Ahouansou
3ad7675bbf Merge branch 'format-toml' into 'next'
style: format all toml with taplo

See merge request famedly/conduit!673
2024-05-06 20:16:13 +00:00
Matthias Ahouansou
e2d91e26d6
style: format all toml with taplo 2024-05-06 20:57:56 +01:00
Matthias Ahouansou
20d9f3fd5d Merge branch 'media' into 'next'
fix: make media response match spec

See merge request famedly/conduit!672
2024-05-06 18:37:13 +00:00
Timo Kösters
965b6df83d
fix: make media response match spec 2024-05-06 20:05:51 +02:00
Matthias Ahouansou
8876d54d78
feat(admin): add hash-and-sign-event command 2024-05-05 17:35:02 +01:00
Matthias Ahouansou
d8badaf64b
fix(membership): always set reason & allow new events if reason changed 2024-05-05 15:28:18 +01:00
Matthias Ahouansou
08485ea5e4 Merge branch 'bump-rust-nix' into 'next'
chore: bump rust & nix

See merge request famedly/conduit!668
2024-05-05 12:48:36 +00:00
Matthias Ahouansou
eec9b9ed87
chore: bump nix 2024-05-05 13:28:00 +01:00
Matthias Ahouansou
256dae983b
chore: bump rust
and fix new lints that come with it
2024-05-05 13:27:56 +01:00
Matthias Ahouansou
79c4bb17ca Merge branch 'build-and-cache-everything' into 'next'
Draft: build and cache all packages and CI dependencies

See merge request famedly/conduit!667
2024-05-05 11:00:12 +00:00
Matthias Ahouansou
57a24f234d
Revert "ci: temporarily disable CONDUIT_VERSION_EXTRA"
This reverts commit 2a2b9554c8.
2024-05-05 11:43:06 +01:00
Charles Hall
a4c973e57e
build and cache all packages and CI dependencies
This fixes the problem where some artifacts were not being cached when
they should have been. The secret sauce is the `nix-store` command.
2024-05-05 10:35:31 +01:00
Matthias Ahouansou
f9953c31fc
Revert "ci: use sh instead of bash"
This reverts commit 70b07dfabf.
2024-05-05 09:43:32 +01:00
Matthias Ahouansou
6b669d2f4d Merge branch 'only-cache-attic-when-nix' into 'next'
ci: only cache attic when nix is available

See merge request famedly/conduit!666
2024-05-05 08:19:15 +00:00
Matthias Ahouansou
90c9794221 Merge branch 'sh-not-bash' into 'next'
ci: use sh instead of bash

See merge request famedly/conduit!665
2024-05-05 08:12:20 +00:00
Matthias Ahouansou
358164f49d
ci: only cache attic when nix is available 2024-05-05 09:01:40 +01:00
Matthias Ahouansou
70b07dfabf
ci: use sh instead of bash 2024-05-05 08:53:53 +01:00
Matthias Ahouansou
d97f5aa3b8 Merge branch 'ci/fastzip' into 'next'
ci: faster cache and artifact handling

See merge request famedly/conduit!664
2024-05-05 05:54:18 +00:00
Matthias Ahouansou
bed9072a69 Merge branch 'always-cache-attic' into 'next'
ci: prevent unnecessary rebuilds

See merge request famedly/conduit!662
2024-05-05 05:54:16 +00:00
Matthias Ahouansou
2a2b9554c8
ci: temporarily disable CONDUIT_VERSION_EXTRA 2024-05-04 21:50:37 +01:00
Matthias Ahouansou
3d9d975a9f
ci: avoid rebuilding bindgen and friends 2024-05-04 15:17:02 +01:00
Charles Hall
96cc1f6abd
only set CONDUIT_VERSION_EXTRA for final build
This prevents us from needing to recompile the dependencies when that
environment variable changes, which it generally does on every commit,
far more frequently than the dependencies themselves change.
2024-05-04 13:09:16 +01:00
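For context on why this helps caching: the extra version information is read at compile time, so any crate that reads the variable must be rebuilt when it changes. A hedged sketch of the usual pattern (reading it only in the final binary via `option_env!`); this is an assumption about the mechanism rather than a quote of Conduit's code:

```rust
// Compile-time environment lookup: if CONDUIT_VERSION_EXTRA changes,
// only crates that read it need to be recompiled.
const VERSION_EXTRA: Option<&str> = option_env!("CONDUIT_VERSION_EXTRA");

fn version_string() -> String {
    match VERSION_EXTRA {
        Some(extra) => format!("{} ({extra})", env!("CARGO_PKG_VERSION")),
        None => env!("CARGO_PKG_VERSION").to_owned(),
    }
}

fn main() {
    println!("conduit {}", version_string());
}
```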
Charles Hall
55259329a3
make it easy to configure cargo profiles from nix
This way you can easily build in debug mode with Nix.
2024-05-04 13:09:15 +01:00
Charles Hall
d796fe7cd3
make it easy to configure cargo features from nix
Users of the nix package can now just use `.override` to choose what
features they want.

This also makes RocksDB automatically use jemalloc when Conduit is
configured to use jemalloc.
2024-05-04 13:09:14 +01:00
Charles Hall
3336d3f812
factor out nix code into new files via makeScope
This makes the Nix code a lot easier to reason about.
2024-05-04 12:48:49 +01:00
Charles Hall
c9ee4a920e
always go through inputs
This way we don't have to modify the destructuring of outputs' argument when adding or removing inputs.
2024-05-04 10:03:49 +01:00
Samuel Meenzen
5570f5f3da
ci: faster cache and artifact handling 2024-05-02 15:08:40 +02:00
Matthias Ahouansou
38200163d6 Merge branch 'error-create-remote-user' into 'next'
fix(admin): don't allow creation of remote users

See merge request famedly/conduit!663
2024-05-02 10:39:46 +00:00
Matthias Ahouansou
9db1f5a13c
fix(admin): don't allow creation of remote users 2024-05-02 10:45:04 +01:00
Matthias Ahouansou
0074aca0ef Merge branch '244-support-well-known' into 'next'
feat: add .well-known support

Closes #244 and #378

See merge request famedly/conduit!332
2024-05-02 09:35:14 +00:00
Matthias Ahouansou
3ecf835b50
ci: cache attic on artifacts job 2024-05-02 09:53:53 +01:00
Matthias Ahouansou
6e913bfec4
docs: delegation 2024-05-02 09:41:27 +01:00
Jakub Kubík
c1f695653b
feat: support hosting .well-known from Conduit
Co-authored-by: Matthias Ahouansou <matthias@ahouansou.cz>
2024-05-02 09:26:43 +01:00
Matthias Ahouansou
e6b6cc77d1 Merge branch 'verify-x-matrix-destination' into 'next'
feat(auth): check if X-Matrix destination is correct if present

Closes #271

See merge request famedly/conduit!661
2024-05-02 07:15:20 +00:00
Matthias Ahouansou
b69a74961b Merge branch 'x-matrix-destination-header' into 'next'
feat(federation): add destination field to X-Matrix header

See merge request famedly/conduit!660
2024-05-02 06:59:52 +00:00
Matthias Ahouansou
622aa574b6 Merge branch 'namespace-fixes' into 'next'
fix(appservices): don't forward events relating to remote users, and forward events relating to remote aliases

See merge request famedly/conduit!653
2024-05-02 06:28:01 +00:00
Matthias Ahouansou
63ba157ef6
feat(auth): check if X-Matrix destination is correct if present 2024-05-02 07:14:44 +01:00
Matthias Ahouansou
dfe2916357
feat(federation): add destination field to X-Matrix header 2024-05-02 07:01:04 +01:00
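The X-Matrix scheme above is the federation `Authorization` header; with this change the serialized parameters include `destination` alongside `origin`, `key`, and `sig`. A formatting-only sketch with placeholder values:

```rust
// Illustrative construction of an X-Matrix Authorization header value.
// Parameter values here are placeholders, not real keys or signatures.
fn x_matrix_header(origin: &str, destination: &str, key_id: &str, sig: &str) -> String {
    format!(
        "X-Matrix origin=\"{origin}\",destination=\"{destination}\",key=\"{key_id}\",sig=\"{sig}\""
    )
}

fn main() {
    println!(
        "{}",
        x_matrix_header(
            "origin.example",
            "destination.example",
            "ed25519:abc123",
            "<base64 signature>",
        )
    );
}
```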
Matthias Ahouansou
b4a60c3f9a Merge branch 'ci' into 'next'
ci: use attic.conduit.rs

See merge request famedly/conduit!658
2024-05-02 05:56:19 +00:00
Matthias Ahouansou
0b217232ac
ci: push attic to binary cache, update nix script 2024-05-01 21:55:28 +01:00
Timo Kösters
3b7c001350
ci: use attic.conduit.rs 2024-05-01 11:26:12 +02:00
Matthias Ahouansou
10412d4be9 Merge branch 'master' into 'next'
Pull v0.7.0 tag into next

See merge request famedly/conduit!657
2024-04-28 12:28:25 +00:00
Matthias Ahouansou
3c8879771c Merge branch 'docker-update-request-size' into 'next'
docs(docker): don't use underscores for max request size

Closes #289

See merge request famedly/conduit!656
2024-04-28 11:47:04 +00:00
Matthias Ahouansou
4fac7ca169 Merge branch 'backup-faq-docs' into 'next'
docs(faq): add backup instructions

See merge request famedly/conduit!655
2024-04-28 11:08:22 +00:00
Matthias Ahouansou
a23bfdb3f0
docs(docker): don't use underscores for max request size 2024-04-28 11:48:42 +01:00
Matthias Ahouansou
60fe238893
docs(faq): add backup instructions 2024-04-28 11:16:21 +01:00
Matthias Ahouansou
cd4e4fe7ac Merge branch 'Daniel15au-next-patch-38459' into 'next'
[docs] Remove references to legacy Docker Compose v1

See merge request famedly/conduit!642
2024-04-27 21:30:39 +00:00
Matthias Ahouansou
0a7ac058a0 Merge branch 'docs-migrate-faq-fix' into 'next'
docs(faq): correct answer about migration

See merge request famedly/conduit!654
2024-04-27 20:56:42 +00:00
Daniel Lo Nigro
c90e4816b7
[docs] Update docker-compose commands 2024-04-27 21:32:06 +01:00
Matthias Ahouansou
2b5295aa29
docs(faq): correct answer about migration 2024-04-27 20:55:19 +01:00
Matthias Ahouansou
df0ad2d07c
fix(appservices): don't forward events relating to remote users, and forward events relating to remote aliases 2024-04-27 20:41:28 +01:00
Matthias Ahouansou
bd5d9a7560 Merge branch 'stun-spelling' into 'next'
docs: fix STUN typo

See merge request famedly/conduit!652
2024-04-27 11:36:26 +00:00
Matthias Ahouansou
14ede9898d Merge branch 'faq' into 'next'
docs: add FAQ

See merge request famedly/conduit!651
2024-04-27 11:31:57 +00:00
Matthias Ahouansou
0220e9e9d1 Merge branch 'update-deps' into 'next'
Update trivial dependencies

See merge request famedly/conduit!650
2024-04-27 10:49:33 +00:00
Matthias Ahouansou
f62db723f7
docs: fix STUN typo 2024-04-27 11:32:20 +01:00
Matthias Ahouansou
a499c80d1b
docs: add FAQ 2024-04-27 11:27:56 +01:00
Matthias Ahouansou
5760d98192
chore: upgrade rocksdb in flake 2024-04-27 10:52:24 +01:00
Matthias Ahouansou
2d3f64c1e5
chore: upgrade lockfile 2024-04-27 10:45:53 +01:00
Ossi Herrala
aff97e4032
Update image crate 2024-04-27 11:15:04 +03:00
Ossi Herrala
a56139549f
Trust-DNS has been renamed to Hickory-DNS 2024-04-27 11:14:59 +03:00
Ossi Herrala
3b6928ebcf
Update dependencies that don't need code changes 2024-04-27 11:13:30 +03:00
Ossi Herrala
61cd2892b8
Remove unused dependencies 2024-04-27 11:13:30 +03:00
Timo Kösters
acef61a3cc Merge branch 'bump' into 'next'
Bump version to v0.8.0-alpha

See merge request famedly/conduit!647
2024-04-25 20:00:26 +00:00
Timo Kösters
c6a7563126 Merge branch 'docs' into 'next'
Update download links in documentation

See merge request famedly/conduit!648
2024-04-25 08:02:34 +00:00
Timo Kösters
779cebcd77
Update download links in documentation 2024-04-25 10:01:55 +02:00
Timo Kösters
3b3466fd51
Bump version to v0.8.0-alpha 2024-04-25 09:19:44 +02:00
Timo Kösters
a854ce5cf6 Merge branch 'next' into 'master'
Merge branch 'next' into 'master'

See merge request famedly/conduit!646
2024-04-24 23:23:33 +00:00
Timo Kösters
414056442a Merge branch 'bump' into 'next'
Bump version to v0.7.0

See merge request famedly/conduit!645
2024-04-24 22:22:56 +00:00
Timo Kösters
7c83372336 Merge branch 'exclusive-namespace-error' into 'next'
feat(appservice): ensure users/aliases outside of namespaces are not accessed

See merge request famedly/conduit!634
2024-04-24 21:39:20 +00:00
Timo Kösters
a854a46c24
Bump version to v0.7.0 2024-04-24 23:24:20 +02:00
Timo Kösters
429f80548f Merge branch 'sync-up-debian-generated-config' into 'next'
Sync up the generated Conduit config for Debian

See merge request famedly/conduit!644
2024-04-24 21:01:58 +00:00
Timo Kösters
a140bf8a6f Merge branch 'authorized-user-search' into 'next'
fix(membership): perform stricter checks when choosing an authorized user

See merge request famedly/conduit!620
2024-04-24 21:00:52 +00:00
Matthias Ahouansou
74db555336
fix(membership): perform stricter checks when choosing an authorized user 2024-04-24 20:54:07 +01:00
Timo Kösters
08636ef236 Merge branch 'can-invite-state-lock' into 'next'
fix(state-accessor): hold the state_lock when checking if a user can invite

See merge request famedly/conduit!643
2024-04-24 19:29:01 +00:00
Matthias Ahouansou
3086271139
feat(appservice): ensure users/aliases outside of namespaces are not accessed 2024-04-24 19:51:28 +01:00
Matthias Ahouansou
e40aed3a7d
fix(state-accessor): hold the state_lock when checking if a user can invite 2024-04-24 19:17:00 +01:00
Paul van Tilburg
0c0c9549b9
Sync up the generated Conduit config for Debian
This applies changes made in the example config by commits dc89fbe and
844508b.
2024-04-24 20:12:38 +02:00
Timo Kösters
53d3f9ae89 Merge branch 'registration-token-in-config' into 'next'
add registration_token in default cfg and DEPLOY

See merge request famedly/conduit!557
2024-04-24 18:09:47 +00:00
Timo Kösters
7ace9b0dff Merge branch 'check-if-membership-is-case-endpoints' into 'next'
feat(membership): check if user already has the membership that is requested to be set

See merge request famedly/conduit!622
2024-04-24 18:02:08 +00:00
Timo Kösters
624654a88b Merge branch 'fix-unrejectable-invites' into 'next'
Fix unrejectable invites

Closes #418

See merge request famedly/conduit!623
2024-04-24 18:01:48 +00:00
Timo Kösters
461236f3fb Merge branch 'room-v11' into 'next'
Add support for room v11

Closes #408

See merge request famedly/conduit!562
2024-04-24 10:48:12 +00:00
Matthias Ahouansou
1c4ae8d268
fix(redaction): use content.redacts when checking v11 events 2024-04-24 10:52:33 +01:00
Valentin Lorentz
89c1c2109c Link to the specification from user_can_redact's documentation 2024-04-24 08:29:47 +02:00
Matthias Ahouansou
1bae8b35a9 Merge branch 'document-all-configuration' into 'next'
Document all configuration

Closes #435

See merge request famedly/conduit!635
2024-04-23 22:08:03 +00:00
Matthias Ahouansou
00d6aeddb6
refactor(redactions): move checks inside conduit
ruma was already accidentally performing these checks for us, but this shouldn't be the case
2024-04-23 23:05:27 +01:00
Matthias Ahouansou
18f93ae8f3 Merge branch 'no-identity-assertion-optional-auth' into 'next'
fix(appservices): don't perform identity assertion when auth is optional

Closes #430

See merge request famedly/conduit!641
2024-04-23 21:05:01 +00:00
Matthias Ahouansou
6c9c1b5afe
fix(appservices): don't perform identity assertion when auth is optional 2024-04-22 10:33:12 +01:00
Charles Hall
27753b1d96 Merge branch 'updates' into 'next'
Update crane and rocksdb

See merge request famedly/conduit!640
2024-04-21 20:30:48 +00:00
Charles Hall
61cb186b5b
update rocksdb 2024-04-21 12:39:27 -07:00
Charles Hall
8c6ffb6bfc
unpin crane because the bug was fixed
Flake lock file updates:

• Updated input 'crane':
    'github:ipetkov/crane/2c653e4478476a52c6aa3ac0495e4dea7449ea0e?narHash=sha256-XoXRS%2B5whotelr1rHiZle5t5hDg9kpguS5yk8c8qzOc%3D' (2024-02-11)
  → 'github:ipetkov/crane/55f4939ac59ff8f89c6a4029730a2d49ea09105f?narHash=sha256-Vz1KRVTzU3ClBfyhOj8gOehZk21q58T1YsXC30V23PU%3D' (2024-04-21)
2024-04-21 12:31:57 -07:00
Awiteb
2656f6f435
feat(docs): Document all configuration options
Fixes: https://gitlab.com/famedly/conduit/-/issues/435
Suggested-by: Matthias Ahouansou <matthias@ahouansou.cz>
Helped-by: Matthias Ahouansou <matthias@ahouansou.cz>
Signed-off-by: Awiteb <a@4rs.nl>
2024-04-20 23:40:04 +03:00
Awiteb
b48e1300f2
chore(docs): Rename configuration section to Configuration 2024-04-20 23:37:00 +03:00
Timo Kösters
1474b94db6 Merge branch 'disable-federation-router' into 'next'
refactor: disable federation at the router level

See merge request famedly/conduit!629
2024-04-20 20:28:52 +00:00
Charles Hall
e19e6a4bff Merge branch 'default-room-version-10' into 'next'
chore(config): bump default room version to v10

See merge request famedly/conduit!628
2024-04-20 18:47:56 +00:00
Timo Kösters
95f5c61843 Merge branch 'remove-default-database-config' into 'next'
chore: remove default database backend

See merge request famedly/conduit!636
2024-04-18 20:55:03 +00:00
Matthias Ahouansou
4b288fd22f
chore: remove default database backend
has been sqlite for far too long, and having a default for this is just asking for trouble
2024-04-18 20:49:50 +01:00
Valentin Lorentz
2d8c551cd5 Fix doc 2024-04-17 19:41:38 +02:00
Valentin Lorentz
eb6801290b Document copy_redacts 2024-04-17 19:37:32 +02:00
Matthias Ahouansou
7a7c09785e feat(pdu): copy top level redact to content and vice versa 2024-04-17 19:34:36 +02:00
Timo Kösters
d22bf5182b Merge branch 'token-auth-fixes' into 'next'
Token auth fixes

Closes #430

See merge request famedly/conduit!633
2024-04-15 19:46:48 +00:00
Matthias Ahouansou
54e0e2a14c
fix(appservices): don't use identity assertion on account management endpoints 2024-04-15 19:16:18 +01:00
Matthias Ahouansou
475a68cbb9
refactor: disable federation at the router level 2024-04-13 10:39:32 +01:00
Matthias Ahouansou
92817213d5 Add missing import 2024-04-12 05:15:37 +00:00
Matthias Ahouansou
ab8592526f Replace panic!() with unreachable!() 2024-04-12 05:14:39 +00:00
Matthias Ahouansou
561a103140
chore(config): bump default room version to v10 2024-04-11 22:55:18 +01:00
Val Lorentz
b5e21f761b Merge branch 'next' into 'room-v11'
# Conflicts:
#   src/service/rooms/timeline/mod.rs
#   src/utils/error.rs
2024-04-11 17:34:42 +00:00
Matthias Ahouansou
9e6ce8326f Remove TODO 2024-04-11 17:21:00 +00:00
Matthias Ahouansou
e88d137bd7 Replace panic!() with unreachable!() 2024-04-11 17:19:42 +00:00
Charles Hall
7f63948db9 Merge branch 'version-extra' into 'next'
allow including extra info in `--version` output

See merge request famedly/conduit!601
2024-04-11 14:37:37 +00:00
Timo Kösters
f16bff2466 Merge branch 'user_can_membership' into 'next'
refactor(state_accessor): add method to check if a user can invite another user

See merge request famedly/conduit!621
2024-04-06 14:27:20 +00:00
Timo Kösters
e8796d6bf9 Merge branch 'admin-check-remote-users' into 'next'
fix: do not allow administration of remote users

Closes #377

See merge request famedly/conduit!614
2024-04-06 13:21:29 +00:00
Matthias Ahouansou
fe78cc8262
refactor(state_accessor): add method to check if a user can invite another user 2024-04-06 14:20:18 +01:00
Timo Kösters
03f9c888f0 Merge branch 'remove-join_authorized_via_users_servers' into 'next'
fix(membership): remove join_authorized_via_users_server field on state update

See merge request famedly/conduit!619
2024-04-06 13:20:01 +00:00
Matthias Ahouansou
2c73c3adbb
fix(sync): send phoney leave event where room state is unknown on invite rejection 2024-04-06 14:12:18 +01:00
Matthias Ahouansou
9497713a79
fix(membership): check if server is in room to decide whether to do remote leaves 2024-04-06 14:10:11 +01:00
Matthias Ahouansou
110b7e10e6
fix: do not allow administration of remote users 2024-04-05 10:56:28 +01:00
Timo Kösters
6c3ce71304 Merge branch 'dont-expect-reqwest-http-request' into 'next'
fix: do not expect that all http requests are valid reqwest requests

Closes #396

See merge request famedly/conduit!611
2024-04-05 09:53:14 +00:00
Matthias Ahouansou
fb4217486f
feat(membership): check if user already has the membership that is requested to be set 2024-04-05 10:21:44 +01:00
Charles Hall
dc23206e27 Merge branch 'nix-0.28' into 'next'
chore: upgrade nix to 0.28

See merge request famedly/conduit!615
2024-04-04 04:34:46 +00:00
Matthias Ahouansou
0f6b771cdd
fix(membership): remove join_authorized_via_users_server field on state update 2024-04-03 22:46:47 +01:00
Timo Kösters
24e9c99d47 Merge branch 'no-auth-ignore-token' into 'next'
fix: ignore access tokens where they are not needed

See merge request famedly/conduit!617
2024-04-02 18:18:59 +00:00
Matthias Ahouansou
0d62c9de7c
fix: ignore access tokens where they are not needed 2024-04-02 17:19:59 +01:00
Timo Kösters
33fb32be9a Merge branch 'srv-matrix-fed-record' into 'next'
feat: use _matrix-fed._tcp SRV record, fallback to _matrix._tcp

See merge request famedly/conduit!616
2024-04-01 21:30:58 +00:00
Matthias Ahouansou
e38af9b7fc
feat: use _matrix-fed._tcp SRV record, fallback to _matrix._tcp 2024-04-01 20:55:13 +01:00
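The lookup order implied by the commit above: try the `_matrix-fed._tcp.<hostname>` SRV record first and fall back to the deprecated `_matrix._tcp.<hostname>` record only when the former is absent. A sketch of that ordering with a hypothetical `lookup_srv` helper (Conduit's real code goes through its DNS resolver crate):

```rust
// Hypothetical SRV lookup used only to illustrate the ordering.
fn lookup_srv(record: &str) -> Option<(String, u16)> {
    // Pretend there is no _matrix-fed._tcp record for this example.
    if record.starts_with("_matrix._tcp.") {
        Some(("matrix.example.org".to_owned(), 8448))
    } else {
        None
    }
}

fn resolve_federation_target(hostname: &str) -> Option<(String, u16)> {
    lookup_srv(&format!("_matrix-fed._tcp.{hostname}"))
        .or_else(|| lookup_srv(&format!("_matrix._tcp.{hostname}")))
}

fn main() {
    assert_eq!(
        resolve_federation_target("example.org"),
        Some(("matrix.example.org".to_owned(), 8448))
    );
}
```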
Matthias Ahouansou
1c529529aa
chore: upgrade nix to 0.28
needed for musl targets on s390x
2024-04-01 13:36:38 +01:00
Timo Kösters
cf1e7bc1ed Merge branch 'unregister-fail-id-not-found' into 'next'
fix: return error when trying to unregister unknown appservice id

See merge request famedly/conduit!610
2024-03-31 21:26:31 +00:00
Matthias Ahouansou
3ce3d13378
fix: do not expect that all http requests are valid reqwest requests 2024-03-31 22:20:36 +01:00
Matthias Ahouansou
11612e347d
fix: return error when trying to unregister unknown appservice id 2024-03-31 21:24:15 +01:00
Timo Kösters
7aa70e2030 Merge branch 'error-appservice-token-auth' into 'next'
fix: reject requests with authentication when not used

Closes #430

See merge request famedly/conduit!608
2024-03-31 09:43:17 +00:00
Timo Kösters
71546a9fb7 Merge branch 'registration_appservice_token_check' into 'next'
fix: reject /register requests when there is no token and the type is appservice

Closes #430

See merge request famedly/conduit!609
2024-03-31 09:42:21 +00:00
Matthias Ahouansou
5c634ceb6b
fix: reject requests with authentication when not used 2024-03-30 16:50:21 +00:00
Matthias Ahouansou
8d70f69e62
fix: reject /register requests when there is no token and the type is appservice 2024-03-30 12:40:58 +00:00
Timo Kösters
9176474513 Merge branch 'ruma-registration-type' into 'next'
fix: don't panic if registration url is empty

See merge request famedly/conduit!583
2024-03-23 15:33:01 +00:00
Matthias Ahouansou
b20483aa13
refactor(appservices): avoid cloning frequently 2024-03-22 20:53:27 +00:00
Matthias Ahouansou
5c650bb67e
refactor: use BTreeMap for cached registration info 2024-03-22 17:52:47 +00:00
Timo Kösters
b11855e7a1 Merge branch 'performance' into 'next'
improvement: do not save typing edus in db

See merge request famedly/conduit!597
2024-03-22 08:32:40 +00:00
Timo Kösters
1fb5bcf98f
improvement: registration token now only works when registration is enabled 2024-03-22 09:26:11 +01:00
lafleur
34e0e710cb
add registration_token in default cfg and README 2024-03-22 09:17:42 +01:00
Timo Kösters
0bb28f60cf
refactor: minor appservice code cleanup 2024-03-22 08:59:36 +01:00
Timo Kösters
d2817679e5
refactor: remove previous typing implementation and add sync wakeup for new one 2024-03-22 08:24:17 +01:00
Timo Kösters
6bd7ff4917
improvement: do not save typing edus in db 2024-03-22 07:48:44 +01:00
Timo Kösters
bdae9ceccf Merge branch 'rocksdb' into 'next'
improvement: use simpler rocksdb config

See merge request famedly/conduit!602
2024-03-22 06:47:51 +00:00
Charles Hall
3ffdaaddcd Merge branch 'docs' into 'next'
reduce scope of the documentation

See merge request famedly/conduit!607
2024-03-21 23:08:36 +00:00
Charles Hall
5a4ee9808a
make chapter name reflect file name
Personally I think this makes more sense anyway.
2024-03-21 15:52:54 -07:00
Charles Hall
3dd21456ef
reduce scope of nixos documentation
There are so many ways to do this we realistically shouldn't bother
describing any of them, especially because people should be learning all
the options and choosing the one that suits them best anyway.
2024-03-21 15:52:54 -07:00
Charles Hall
f6bfba7014
normalize headers to "Conduit for X" 2024-03-21 15:52:54 -07:00
Charles Hall
f56abba216
rename "simple" deployment to "generic"
The main thing this section is really useful for is explaining how to
configure various reverse proxies, which applies to basically anything.

Also, remove all the language about this being "recommended", because
nothing in this documentation is actually tested in CI.
2024-03-21 15:52:54 -07:00
Charles Hall
2022efd279
remove section about cross compilation
It is very stale. Please just use Nix. Trying to do it outside of Nix
will be an exercise in frustration, I guarantee it.
2024-03-21 15:40:19 -07:00
Charles Hall
0a790686c5
avoid duplicating links in documentation
Because one might forget to update them. I did, initially, which is why
I'm making this change.
2024-03-21 15:40:19 -07:00
Charles Hall
68a33862b3
add mdbook to the devshell 2024-03-21 15:40:19 -07:00
Timo Kösters
879a8b969d
improvement: use simpler rocksdb config 2024-03-21 15:04:40 +01:00
Timo Kösters
81bc1fc4e3 Merge branch 'matrix-ecosystem-clients' into 'next'
docs: point people to the matrix client list instead of element

See merge request famedly/conduit!606
2024-03-18 18:14:32 +00:00
Timo Kösters
1931a45aba Merge branch 'disabled-federation-authcheck' into 'next'
refactor: check if federation is disabled inside the authcheck where possible

See merge request famedly/conduit!605
2024-03-18 18:10:21 +00:00
Matthias Ahouansou
5f0bea6961
refactor: check if federation is disabled inside the authcheck where possible 2024-03-18 09:24:37 +00:00
Matthias Ahouansou
120035685b
docs: point people to the matrix client list instead of element 2024-03-17 19:21:29 +00:00
Charles Hall
a8da61e5b7 Merge branch 'docs/mdbook-full' into 'next'
docs: build docs using mdBook and copy all markdown files

See merge request famedly/conduit!604
2024-03-17 04:06:58 +00:00
Samuel Meenzen
a3968725b4
chore: add EditorConfig 2024-03-16 20:01:16 -07:00
Charles Hall
6800e5fd18
build book in ci, deploy it to gitlab pages 2024-03-16 20:01:15 -07:00
Charles Hall
4f8d3953b3
add nix output for the book 2024-03-16 20:01:15 -07:00
Samuel Meenzen
425660472c
docs: build docs using mdBook 2024-03-16 20:01:15 -07:00
Charles Hall
ab98b52b21 Merge branch 'remove-log-modification' into 'next'
Remove log config modification

See merge request famedly/conduit!553
2024-03-12 22:06:32 +00:00
Charles Hall
ac22b1bed1
allow including extra info in --version output 2024-03-11 20:11:59 -07:00
Charles Hall
741ca63e94 Merge branch 'argparse' into 'next'
Add argument parser for the conduit executable

Closes #285

See merge request famedly/conduit!385
2024-03-11 18:03:39 +00:00
Max Cohen
9a81a49c6a
Add argument parser for the conduit executable
Allow fetching the version with `conduit --version`. Fixes #285.
2024-03-11 09:43:02 -07:00
Charles Hall
c42aeb506f Merge branch 'ci/avoid-duplicate-pipelines' into 'next'
fix(ci): avoid duplicate pipelines

See merge request famedly/conduit!600
2024-03-11 14:34:09 +00:00
Samuel Meenzen
4af691d737
fix(ci): avoid duplicate pipelines 2024-03-11 12:43:49 +01:00
Charles Hall
88fbd5b294 Merge branch 'rename-rocksdb-crate' into 'next'
rename the `rust-rocksdb` crate to just `rocksdb`

See merge request famedly/conduit!599
2024-03-11 07:03:53 +00:00
Charles Hall
d1bc7fcfd2
rename the rust-rocksdb crate to just rocksdb
This way the old `cfg`s still work and we don't need to constantly
remind ourselves what programming language we're using in `use`
statements.

Also fixes a problem where RocksDB users couldn't start Conduit because
the old `cfg`s were using the original crate's name instead of the
`backend_rocksdb` feature name for some reason. Maybe that should be
changed, but I'm not sure.
2024-03-10 23:40:11 -07:00
Charles Hall
dc89fbed3a
document log config syntax, don't give example
Because the old one was stale. Shocking!
2024-03-10 22:53:27 -07:00
Charles Hall
516876f8ef
remove final reference to sled in log config 2024-03-10 22:53:27 -07:00
Charles Hall
ed5bd23255
remove explicit references to log config
They're all stale. Sled was yote long ago.
2024-03-10 22:53:27 -07:00
Charles Hall
5f053a9357
link to example config instead of copying it
DRY FTW
2024-03-10 22:53:27 -07:00
Charles Hall
9ff9e85ebe
add newline to end of file
Please, people.
2024-03-10 22:33:54 -07:00
tezlm
daed4cdddf
Remove log config modification 2024-03-10 22:17:04 -07:00
Charles Hall
086c4daa38 Merge branch 'update-rocksdb' into 'next'
update rocksdb

See merge request famedly/conduit!577
2024-03-11 04:51:40 +00:00
Charles Hall
10f3f9da49
switch/update rocksdb crate
This fork was created because the original seems de-facto unmaintained.
2024-03-10 20:58:01 -07:00
Matthias Ahouansou
fa930182ae
fix(appservices): don't panic on empty registration url
perf(appservices): cache regex for namespaces
2024-03-10 13:27:48 +00:00
Charles Hall
a095e02d04 Merge branch 'ci/optional-artifacts' into 'next'
feat: run ci on demand to prevent unnecessary job executions

See merge request famedly/conduit!585
2024-03-07 18:33:14 +00:00
Samuel Meenzen
0d2f1348da
feat: run ci on demand to prevent unnecessary job executions 2024-03-06 12:39:16 +01:00
Charles Hall
20bb214d7e Merge branch 'ci-efficiency' into 'next'
make CI more efficient

See merge request famedly/conduit!596
2024-03-06 00:17:39 +00:00
Charles Hall
ae69da635b
allow overriding the attic endpoint
And also the public key so that pulling from the new endpoint will work.

This allows other people to host their own attic instances and configure
their (CI) environment to override the default endpoint so e.g. they can
take advantage of a binary cache without having write access to the
official one.

I didn't actually test this change but I think it should work.

Also why'd I format the script like that, ew lol
2024-03-05 15:06:52 -08:00
Charles Hall
d411e9037c
upload all devshell inputs to the cache
This will also include attic, so we don't need to explicitly do this
in `./bin/nix-build-and-cache` anymore, which is good because that
script gets called a good number of times and doing that repeatedly was
a bit of a waste.
2024-03-05 15:06:52 -08:00
Charles Hall
d5a9c6ac32
use nix-built binary to produce debian package
Currently just for `x86_64-unknown-linux-musl`. Theoretically, we can
use this same mechanism for `aarch64-unknown-linux-musl`. Practically,
I'm not sure just this will even work.
2024-03-05 15:06:52 -08:00
Charles Hall
4e09c9e58a
build all nix-based artifacts in a single job
This will reduce the amount of full builds that need to be done by runs
that don't have write access to the nix binary cache.
2024-03-05 15:06:52 -08:00
Charles Hall
6281c64c33
upgrade nixos/nix image 2024-03-05 15:06:51 -08:00
Charles Hall
4f352a711a
add trailing newline to file
Please fix your editor configuration...
2024-03-05 15:06:51 -08:00
Charles Hall
10b7b174b6
fix documented target triple
Even though it doesn't really matter because it's containerized anyway.
2024-03-05 15:06:51 -08:00
Charles Hall
e70f33741c
update flake.lock
Also switch names to match the newer upstream nixpkgs code.

Flake lock file updates:

• Updated input 'attic':
    'github:zhaofengli/attic/fbe252a5c21febbe920c025560cbd63b20e24f3b' (2024-01-18)
  → 'github:zhaofengli/attic/6eabc3f02fae3683bffab483e614bebfcd476b21' (2024-02-14)
• Updated input 'fenix':
    'github:nix-community/fenix/e132ea0eb0c799a2109a91688e499d7bf4962801' (2024-01-18)
  → 'github:nix-community/fenix/c8943ea9e98d41325ff57d4ec14736d330b321b2' (2024-03-05)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/9d9b34354d2f13e33568c9c55b226dd014a146a0' (2024-01-17)
  → 'github:rust-lang/rust-analyzer/9f14343f9ee24f53f17492c5f9b653427e2ad15e' (2024-03-04)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/1ef2e671c3b0c19053962c07dbda38332dcebf26' (2024-01-15)
  → 'github:numtide/flake-utils/d465f4819400de7c8d874d50b982301f28a84605' (2024-02-28)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/842d9d80cfd4560648c785f8a4e6f3b096790e19' (2024-01-17)
  → 'github:NixOS/nixpkgs/b8697e57f10292a6165a20f03d2f42920dfaf973' (2024-03-03)
2024-03-05 15:06:51 -08:00
Charles Hall
161ad8f9a4
update to latest crane before a regression
Once these issues are fixed, or at least just the one against crane, we
can go back to `ref=master`.

Flake lock file updates:

• Updated input 'crane':
    'github:ipetkov/crane/c798790eabec3e3da48190ae3698ac227aab770c' (2024-01-28)
  → 'github:ipetkov/crane/2c653e4478476a52c6aa3ac0495e4dea7449ea0e' (2024-02-11)
2024-03-05 15:06:51 -08:00
Timo Kösters
732d331847 Merge branch 'async-mutex-guards' into 'next'
refactor: use async-aware RwLocks and Mutexes where possible

See merge request famedly/conduit!595
2024-03-05 22:19:27 +00:00
Matthias Ahouansou
ee7efdd403
typo: as -> has 2024-03-05 20:31:40 +00:00
Matthias Ahouansou
07bb369c5c
perf: remove unnecessary async 2024-03-05 20:20:19 +00:00
Matthias Ahouansou
17dd8cb918
style: rename Sync(Mutex|RwLock) to Std(Mutex|RwLock) 2024-03-05 20:16:28 +00:00
Matthias Ahouansou
e33d8430d3
typo: colsures -> closures 2024-03-05 20:13:39 +00:00
Matthias Ahouansou
c58af8485d
revert: remove dependency on async_recursion 2024-03-05 19:59:24 +00:00
Matthias Ahouansou
becaad677f
refactor: use async-aware RwLocks and Mutexes where possible 2024-03-05 14:23:59 +00:00
Timo Kösters
57575b7c6f Merge branch 'dont-give-guests-admin' into 'next'
fix(accounts): don't give guests admin

See merge request famedly/conduit!591
2024-03-04 17:00:14 +00:00
Matthias Ahouansou
4934020ee7
style: remove unnecessary else block 2024-03-04 09:33:03 +00:00
Timo Kösters
7bb480ceb8 Merge branch 'readme' into 'next'
docs: small fixes for the README

See merge request famedly/conduit!592
2024-03-03 23:13:47 +00:00
Matthias Ahouansou
da5975d727 fix: avoid panics when admin room is not available 2024-03-03 22:42:24 +00:00
Timo Kösters
56a57d5489 docs: small fixes for the README 2024-03-03 15:56:03 +01:00
Matthias Ahouansou
e06e15d4ec
fix(accounts): don't give guests admin 2024-03-03 11:26:58 +00:00
Timo Kösters
18e684b92e Merge branch 'performance' into 'next'
Improvements to /sync performance and db size

See merge request famedly/conduit!590
2024-03-02 15:20:21 +00:00
Timo Kösters
a159fff08a
improvement: deactivate old presence code because it slows down sync
The problem is that for each sync, it creates a new rocksdb iterator for each room, and creating iterators is somewhat expensive
2024-02-29 10:31:25 +01:00
Timo Kösters
62dda7a43f
improvement: delete old rocksdb LOG files 2024-02-29 10:28:06 +01:00
Timo Kösters
99ab234f40
Merge branch 'fixes' into 'next'
Avoid panic when client is confused about rooms

See merge request famedly/conduit!588
2024-02-28 16:19:48 +00:00
Timo Kösters
e83416bb5a
Merge branch 'fixnginx' into 'next'
Fixed nginx proxy_pass directive

See merge request famedly/conduit!589
2024-02-28 16:09:55 +00:00
olly1240
726b6f0fa6
Fixed nginx proxy_pass directive 2024-02-28 16:38:06 +01:00
Timo Kösters
d7fd89df49
fix: avoid panic when client is confused about rooms 2024-02-28 16:31:41 +01:00
Timo Kösters
f4e57fdb22
Avoid federation when it is not necessary 2024-02-28 16:27:08 +01:00
Timo Kösters
4f096adcfa Merge branch 'bump-ruma' into 'next'
Bump ruma to latest commit

See merge request famedly/conduit!586
2024-02-25 19:35:54 +00:00
Matthias Ahouansou
21a5fa3ef0 refactor: use re-exported JsOption from ruma rather than directly adding it as a dependency 2024-02-25 10:30:30 +00:00
Matthias Ahouansou
b27e9ea95c chore: bump ruma to latest commit (as of 2024-02-25) 2024-02-25 08:49:20 +00:00
Matthias Ahouansou
8aa915acb9 bump ruma, support deprecated user login field 2024-02-23 20:29:17 +00:00
Matthias Ahouansou
ace9637bc2 replace unwraps with expects 2024-02-23 19:39:30 +00:00
Charles Hall
be1e2e9307 Merge branch 'ci/push-dockerhub' into 'next'
feat(ci): push oci-image to docker hub

See merge request famedly/conduit!584
2024-02-18 01:36:50 +00:00
Samuel Meenzen
1c6a4b1b24 feat(ci): push oci-image to docker hub 2024-02-18 01:36:50 +00:00
Matthias Ahouansou
976a73a0e5 style: appease rustfmt 2024-02-16 21:19:40 +00:00
Matthias Ahouansou
4c06f329c4 refactor: appease clippy 2024-02-16 21:13:59 +00:00
Matthias Ahouansou
d841b81c56 chore: update Cargo.lock 2024-02-16 20:52:19 +00:00
Matthias Ahouansou
e707084345 chore: bump ruma to latest commit (as of 2024-02-16) 2024-02-16 20:52:07 +00:00
strawberry
6dcc8b6cf1 bump ruma to latest commit (syncv3 JsOption and push optional power levels)
Signed-off-by: strawberry <strawberry@puppygock.gay>
2024-02-16 20:45:48 +00:00
strawberry
a2ac491c54 bump ruma, add wrong room keys error code, tiny logging change
can't update ruma to very latest commit because of the weird JsOption thing for syncv4 that i can't wrap my head around how to use, not important anyways

Signed-off-by: strawberry <strawberry@pupbrain.dev>
2024-02-16 20:45:27 +00:00
Charles Hall
72a13d8353 Merge branch 'flake-compat' into 'next'
support non-flake users

See merge request famedly/conduit!581
2024-02-02 03:32:41 +00:00
Raito Bezarius
3a63f9dfb6
feat: support non-flake users
This uses flakes-compat to read the `flake.nix` and expose it
to non-flake users.

Signed-off-by: Raito Bezarius <masterancpp@gmail.com>
2024-02-01 19:19:56 -08:00
Timo Kösters
f4f2d05b5b Merge branch 'update-ring' into 'next'
update ring to ^0.17

See merge request famedly/conduit!580
2024-02-01 10:45:28 +00:00
Charles Hall
c3c7bcb2ed Merge branch 'publish-oci-image' into 'next'
Publish oci image to the gitlab registry

See merge request famedly/conduit!570
2024-01-30 17:04:48 +00:00
Samuel Meenzen
d6c57f9b2e Publish oci image to the gitlab registry 2024-01-30 17:04:47 +00:00
Charles Hall
7fb9e99649
update ring and jsonwebtoken to remove ring ^0.16 2024-01-29 16:21:42 -08:00
Charles Hall
1274b48ebb
run cargo update
`IndexMap::remove` was deprecated in favor of explicitly named methods.
I assume that we actually needed to be using `shift_remove`, otherwise
we probably wouldn't be bothering with `indexmap` here in the first
place. I wonder if this fixes any bugs lol
2024-01-29 16:17:25 -08:00
Charles Hall
0a281e81a5 Merge branch 'fix-oci-image-cross' into 'next'
pass pkgsCrossStatic to mkOciImage, not pkgsHost

See merge request famedly/conduit!579
2024-01-29 23:50:56 +00:00
Charles Hall
a43bde69fa
pass pkgsCrossStatic to mkOciImage, not pkgsHost
This fixes a bug where the aarch64 OCI image had metadata saying it was
an x86_64 OCI image. On top of that, I think the metadata was actually
right (aside from Conduit's binary): since all other packages were being
pulled from `pkgsHost`, an OCI image cross compiled for aarch64 from a
different architecture would result in unexecutable binaries (e.g. tini)
since they were compiled for the completely wrong architecture.
2024-01-29 15:39:09 -08:00
Charles Hall
986343877c Merge branch 'artifact-links' into 'next'
update DEPLOY.md with new build links

See merge request famedly/conduit!578
2024-01-29 23:04:24 +00:00
Charles Hall
2d47710b55
update DEPLOY.md with new build links 2024-01-29 14:55:48 -08:00
Charles Hall
10542a1d70 Merge branch 'use-upstream-crane' into 'next'
switch crane input back to upstream

See merge request famedly/conduit!576
2024-01-28 21:54:38 +00:00
Charles Hall
c167f7a6ad
switch crane input back to upstream
Thanks to the crane maintainer for fixing my issue in a way that doesn't
suck, unlike my attempt in the fork we were briefly using.
2024-01-28 13:31:03 -08:00
Charles Hall
5787a70bab Merge branch 'fix-complement' into 'next'
make complement (mostly) work again

See merge request famedly/conduit!575
2024-01-28 03:21:04 +00:00
Charles Hall
cf8f1f2546
make a bunch of changes so complement works again
Well, kinda. It crashed on me after 10 minutes because the tests timed
out like in <https://github.com/matrix-org/complement/issues/394>.
Sounds like this means it's a them problem though.

I want to use Nix to build this image instead in the future but this
will at least make it work for now and give me a reference for while I'm
porting it. I also want to make Conduit natively understand Complement's
requirements instead of `sed`ing a bunch of stuff and needing a reverse
proxy in the container. Should be more reliable that way.

I'm not making this run in CI until the above stuff is addressed and
until I can decide on a way to pin the revision of Complement being
tested against.
2024-01-27 18:09:43 -08:00
Charles Hall
3c2fc4a4c6 Merge branch 'oci-image-ca-certs' into 'next'
add ca certificates to the OCI image

See merge request famedly/conduit!574
2024-01-27 20:37:21 +00:00
Charles Hall
dffd771e7c
add ca certificates to the OCI image
Without this, checking the authority of TLS certificates fails, making
Conduit (rightly) refuse to connect to anything.
2024-01-27 12:25:06 -08:00
Charles Hall
4da8c7e282 Merge branch 'docker-tag' into 'next'
change docker tag back to `next`

See merge request famedly/conduit!573
2024-01-27 05:55:08 +00:00
Charles Hall
0df5d18fd6
change docker tag back to next
I misunderstood what the general meaning of the `latest` tag was.
2024-01-26 21:38:13 -08:00
Charles Hall
825ceac1c3 Merge branch 'update-rust' into 'next'
update rust toolchain

See merge request famedly/conduit!572
2024-01-26 18:27:47 +00:00
Charles Hall
3e389256f5
switch lint config to manifest-lint feature
I removed some lint configuration in the process:

* `#[allow(clippy::suspicious_else_formatting)]` because nothing is
  currently triggering it.
* `#[warn(clippy::future_not_send)]` because some stuff under
  `src/lib.rs` is. And also like, auto-trait leakage generally means
  this isn't a problem, and if things really need to be `Send`, then
  you'll probably know to mark it manually.
* `#[warn(rust_2018_idioms)]` and replaced it with
  `explicit-outlives-requirements = "warn"` which is the most useful
  lint in that group that isn't enabled by default.
2024-01-26 01:03:55 -08:00
Charles Hall
a7892a28ec
refer directly to stdenv since it's in scope 2024-01-25 22:00:32 -08:00
Charles Hall
9453dbc740
update rust toolchain
It comes with a bunch of new lints (yay!) so I fixed them all so CI will
keep working.

Also apparently something about linking changed because I had to change
the checks for deciding the linker flags for static x86_64 builds to
keep working.
2024-01-25 21:44:40 -08:00
Charles Hall
bf48c10d28 Merge branch 'cross' into 'next'
cross compile static binaries for x86_64 and aarch64

See merge request famedly/conduit!569
2024-01-26 04:12:15 +00:00
Charles Hall
7c1a3e41d9
add package to build an aarch64 oci image
And build it as an artifact in CI.
2024-01-25 20:03:29 -08:00
Charles Hall
2a04a361e0
break oci image builder into a function
Now it can be reused for different `pkgs` and `package`s.
2024-01-25 20:03:25 -08:00
Charles Hall
0e8e4f1083
add static cross to aarch64-unknown-linux-musl 2024-01-25 19:44:06 -08:00
Charles Hall
81ae579b25
add static cross to x86_64-unknown-linux-musl 2024-01-25 19:43:23 -08:00
Charles Hall
3a3cafe912
preempt cross problems by using my crane fork
I imagine this will get fixed/merged upstream in the near future.
2024-01-25 11:39:17 -08:00
Charles Hall
d29591d47d
group packages in attrset literal
This will make generating packages for cross possible.
2024-01-25 11:39:17 -08:00
Charles Hall
67d280dd2e
factor package expression into a function
We'll need to call it repeatedly to make packages for cross.
2024-01-25 11:39:17 -08:00
Charles Hall
3ac9be5a78
add x86_64-unknown-linux-gnu
This is probably the most common target and usually doesn't involve
cross compilation.
2024-01-25 11:39:17 -08:00
Charles Hall
52954f7a11
use fromToolchainFile
I *think* this will make it easier to pull in extra rustc targets.
2024-01-25 11:39:16 -08:00
Charles Hall
692a31620d
make let bindings take pkgs as an argument
Again, will make cross compilation easier to set up.
2024-01-25 11:39:16 -08:00
Charles Hall
cf4015b830
rename pkgs to pkgsHost
This will make organizing cross compilation easier.
2024-01-25 11:39:16 -08:00
Charles Hall
9cef03127b
remove with for nativeBuildInputs
It's going to get more involved and that `with` was too specific.
2024-01-25 11:39:16 -08:00
Charles Hall
249fc7769d
don't bother with mold
For now, at least. I suspect it will make cross compilation more
difficult.
2024-01-25 11:39:16 -08:00
Charles Hall
5cc53c9e14
push oci image and x86_64-*-gnu build to bin cache
This will allow most Nix users to use the `default` package and without
having to build from source. And also allows any weirdos to get the OCI
image from the Nix binary cache if they want. No idea why that would be
desirable though lol
2024-01-25 11:37:35 -08:00
Charles Hall
bdc46f6392
add script to build and push to binary cache
This is even useful for local development, as you can pre-populate the
binary cache before running CI (assuming you have the token). Also, it
being in a script makes it easier to test.

We've added attic as a flake input even though the flake itself doesn't
use it so that we can use `--inputs-from .` in Nix commands to reference
a locked version of attic. This helps with reproducibility and caching,
and to makes it easy to update attic because it's part of the normal
flake lifecycle.
2024-01-25 11:34:46 -08:00
Charles Hall
6ae776218c
add our own binary cache
The machine I'm hosting this on doesn't have incredible upload speeds
but it should be good enough?
2024-01-25 10:51:21 -08:00
Charles Hall
bd2b146d5d
add crane binary cache
This way we don't need to build e.g. crane-utils every time.
2024-01-24 23:25:48 -08:00
Charles Hall
f7cc4fb3bb
state artifacts' targets and rename artifacts
This will make it more obvious what's what and be more internally
consistent.
2024-01-24 23:25:42 -08:00
Charles Hall
ca198c51fa Merge branch 'reqwest-follow-up' into 'next'
move resolver logic into the resolver

See merge request famedly/conduit!571
2024-01-24 23:24:00 +00:00
Charles Hall
fe86d28428
move resolver logic into the resolver
Honestly not sure why it wasn't done like this before. This code is much
less awkward to follow and more compartmentalized.

These changes were mainly motivated by a clippy lint triggering on the
original code, which then made me wonder if I could get rid of some of
the `Box`ing. Turns out I could, and this is the result of that.
2024-01-24 15:11:17 -08:00
Timo Kösters
c86f9a5c5b Merge branch 'pr_upstream_reqwest' into 'next'
Use upstream `reqwest` instead of vendored one

See merge request famedly/conduit!527
2024-01-24 21:59:34 +00:00
Timo Kösters
e0358a9de5 Merge branch 'send_push_to_invited_user' into 'next'
feat: send push notification on invite to the invited user, etc.

Closes #399

See merge request famedly/conduit!559
2024-01-24 17:55:22 +00:00
Tobias Bucher
69d0003222 Use upstream reqwest instead of vendored one
This uses the `ClientBuilder::dns_resolver` function that was added in
reqwest 0.11.13, instead of the homebrew `ClientBuilder::resolve_fn`.
2024-01-24 17:12:43 +01:00
Timo Kösters
5cf9f3df48 Merge branch 'lints' into 'next'
fix CI

See merge request famedly/conduit!564
2024-01-24 15:42:20 +00:00
Charles Hall
0b7ed5adc9
add debian package building in ci
This uses a separate step and docker image, which I'm not a huge fan of.
At least I could get this to work for now, but I won't be shocked when
it breaks later. I know, I know, fixing this kind of problem is the
exact reason I bothered to do this, but I was really struggling to do
better here. Maybe I can take a second pass at this later.

Also, this explicitly names the caches, because without this, various
things related to linking will break.
2024-01-24 07:22:37 -08:00
Charles Hall
4de54db305
redo docker image and build it in ci 2024-01-24 07:22:37 -08:00
Charles Hall
02781e4f9b
use nix-filter to filter sources
This prevents nix from rebuilding conduit when files that don't actually
affect the build are changed.
2024-01-24 07:22:37 -08:00
Charles Hall
f8bdfd82b0
update flake.lock
Flake lock file updates:

• Updated input 'crane':
    'github:ipetkov/crane/8b08e96c9af8c6e3a2b69af5a7fa168750fcf88e' (2023-07-07)
  → 'github:ipetkov/crane/742170d82cd65c925dcddc5c3d6185699fbbad08' (2024-01-18)
• Removed input 'crane/flake-compat'
• Removed input 'crane/flake-utils'
• Removed input 'crane/rust-overlay'
• Removed input 'crane/rust-overlay/flake-utils'
• Removed input 'crane/rust-overlay/nixpkgs'
• Updated input 'fenix':
    'github:nix-community/fenix/39096fe3f379036ff4a5fa198950b8e79defe939' (2023-07-16)
  → 'github:nix-community/fenix/e132ea0eb0c799a2109a91688e499d7bf4962801' (2024-01-18)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/996e054f1eb1dbfc8455ecabff0f6ff22ba7f7c8' (2023-07-15)
  → 'github:rust-lang/rust-analyzer/9d9b34354d2f13e33568c9c55b226dd014a146a0' (2024-01-17)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
  → 'github:numtide/flake-utils/1ef2e671c3b0c19053962c07dbda38332dcebf26' (2024-01-15)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/8acef304efe70152463a6399f73e636bcc363813' (2023-07-15)
  → 'github:NixOS/nixpkgs/842d9d80cfd4560648c785f8a4e6f3b096790e19' (2024-01-17)
2024-01-24 07:22:37 -08:00
Charles Hall
7e66d2e2c0
use nix and engage to manage ci 2024-01-24 07:22:37 -08:00
Charles Hall
ffd03a256b
remove workflow rules
I don't think these are actually necessary? At least in my own testing,
I haven't seen duplicate pipelines for a single commit when converting
from just a branch to a merge request.
2024-01-24 07:22:37 -08:00
Charles Hall
9d592d60d2
remove dockerlint step because it does nothing
It's configured to let the pipeline pass even if there are warnings or
errors, i.e. it's pointless.
2024-01-24 07:22:37 -08:00
Charles Hall
25ceb5ebd8
remove commented out ci step
If you want it back, just look at the git history.
2024-01-24 07:22:37 -08:00
Charles Hall
6f052fff98
improve nix flake
Also fix the comment in `Cargo.toml` about the rust-version stuff.
2024-01-24 07:22:37 -08:00
Charles Hall
e8ac881b2f
add an engage file
See <https://charles.page.computer.surgery/engage/> for info.
2024-01-24 07:22:37 -08:00
Charles Hall
0d17aedae5
fix cargo doc lints
Rustdoc (rightfully) thought the `[commandbody]` "tags" were broken
links, so I've just made them links to nothing instead.
2024-01-24 07:22:37 -08:00
Charles Hall
ab1fff2642
fix cargo clippy lints 2024-01-24 07:22:37 -08:00
Charles Hall
92c5b6b86c
fix cargo check lints 2024-01-24 07:22:25 -08:00
Charles Hall
dc2f53e773
comment out heed backend things
The heed backend code in conduit doesn't compile.
2024-01-18 12:27:48 -08:00
Timo Kösters
2475995102 Merge branch 'conduit/declare-1.5-support' into 'next'
declare 1.5 support

See merge request famedly/conduit!568
2024-01-17 17:56:21 +00:00
Charles Hall
835f4ad8cf
declare 1.5 support 2024-01-16 13:52:56 -08:00
Val Lorentz
8175bc1246 Explicitly match RoomVersionId::V11 2023-12-24 19:04:48 +01:00
Val Lorentz
eb7ac91cd5 Reuse existing get_room_version 2023-12-24 19:02:03 +01:00
Timo Kösters
ca6219723b Merge branch 'conduit/envrc-shebang' into 'next'
add shebang to .envrc

See merge request famedly/conduit!566
2023-12-24 15:36:52 +00:00
Timo Kösters
40c7c248fb Merge branch 'conduit/rm-presence-panic' into 'next'
don't panic on missing presence status for a user

See merge request famedly/conduit!565
2023-12-24 15:36:36 +00:00
Charles Hall
8f3f5c01f9
add shebang to .envrc
All this really does is make syntax highlighting and shellcheck work by
default in more editors.
2023-12-23 22:21:19 -08:00
Charles Hall
9d7f7b871b
don't panic on missing presence status for a user 2023-12-23 21:01:01 -08:00
Timo Kösters
30f0871e21 Merge branch 'sendjoin-signature-error' into 'next'
Log underlying error when rejecting sendjoin response

See merge request famedly/conduit!563
2023-12-11 15:11:42 +00:00
Val Lorentz
98e81c6217 Log underlying error when rejecting sendjoin response 2023-12-03 19:38:09 +01:00
Val Lorentz
5a7bb1e8f1 Return error instead of panic when first event is not m.room.create 2023-12-02 17:51:19 +01:00
Val Lorentz
520806d413 Use Ruma's redact_content_in_place instead of custom implementation 2023-12-01 19:12:23 +01:00
Val Lorentz
9646439a94 Enable support for room v11 2023-12-01 18:29:15 +01:00
Val Lorentz
fac995036a create_hash_and_sign_event: Use actual version of RoomCreate events, instead of the default 2023-12-01 18:29:15 +01:00
Val Lorentz
18bfd79ef2 Remove "creator" key when upgrading rooms to v11 2023-12-01 18:28:51 +01:00
Val Lorentz
a3b8eea9b4 Move "redacts" key to "content" in redaction events in v11 rooms 2023-12-01 18:28:51 +01:00
Val Lorentz
d39d30008a Remove "creator" property from rooms >= v11 2023-12-01 15:11:26 +01:00
AndSDev
f3b6b3e222 feat: send push notification on invite to the invited user, etc. 2023-11-07 12:46:53 +00:00
Timo Kösters
3bfdae795d Merge branch 'sliding' into 'next'
Sliding sync improvements and redaction fixes

See merge request famedly/conduit!549
2023-09-13 18:57:57 +00:00
Timo Kösters
75c80df271
Sliding sync improvements and redaction fixes 2023-09-13 20:54:53 +02:00
Timo Kösters
094cb888d4 Merge branch 'badacl' into 'next'
fix: ACL error shouldn't break the whole request

See merge request famedly/conduit!542
2023-09-13 18:46:03 +00:00
Timo Kösters
fa725a14e2 Merge branch 'lukehmcc-next-patch-37096' into 'next'
Update README.md to fix typo & fix compatibility with new versions of docker compose

See merge request famedly/conduit!545
2023-09-11 18:34:06 +00:00
Luke McCarthy
9b3664aeeb Update README.md to fix typo & fix compatibility with new versions of docker compose 2023-08-27 02:14:03 +00:00
Jonas Zohren
90fea00dc7 Merge branch 'docs-docker-coturn' into 'next'
Docs: coturn instructions for docker

See merge request famedly/conduit!498
2023-08-23 14:18:42 +00:00
Jonas Zohren
20924a44f1 Suggestion on how to generate a secure key 2023-08-23 11:17:47 +02:00
purplemeteorite
38d6426b0e coturn setup instructions for docker 2023-08-23 11:09:21 +02:00
Timo Kösters
9b55ce933a
Back off from more events, don't retry auth events 2023-08-12 09:53:32 +02:00
Timo Kösters
f73a657a23
fix: ACL error shouldn't break the whole request 2023-08-11 20:29:22 +02:00
Timo Kösters
6dfb262ddf Merge branch 'patch-3' into 'next'
log handling previous event time as debug

See merge request famedly/conduit!540
2023-08-11 09:27:42 +00:00
Timo Kösters
75cdc3a1f6 Merge branch 'roomversionwarnings' into 'next'
Do not show "Invalid room version" errors when server is not in room

See merge request famedly/conduit!541
2023-08-11 09:27:23 +00:00
Timo Kösters
11103a92ed
Do not show "Invalid room version" errors when server is not in room 2023-08-11 10:48:48 +02:00
girlbossceo
ce2017a10e log handling previous event time as debug
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-08-10 23:12:37 +00:00
Timo Kösters
0c2cfda3ae Merge branch 'next' into 'master'
Merge remote-tracking branch 'origin/next'

See merge request famedly/conduit!538
2023-08-10 17:01:56 +00:00
Timo Kösters
4bf8ee1f74 Merge branch 'nextversion' into 'next'
Bump version to v0.6.0

See merge request famedly/conduit!537
2023-08-10 16:58:47 +00:00
Timo Kösters
5d16948030
Bump version to v0.6.0 2023-08-10 18:57:52 +02:00
Timo Kösters
b7b2eb9d05 Merge branch 'trust' into 'next'
improvement: matrix.org is default trusted server if unspecified

See merge request famedly/conduit!536
2023-08-10 15:50:16 +00:00
Timo Kösters
19bfee1835
improvement: matrix.org is default trusted server if unspecified 2023-08-10 17:45:58 +02:00
Timo Kösters
9db87550fd Merge branch 'admincommands' into 'next'
improvement: more forgiving admin command syntax

See merge request famedly/conduit!535
2023-08-10 15:36:29 +00:00
Timo Kösters
606b25b9e7
improvement: more forgiving admin command syntax 2023-08-10 17:26:55 +02:00
Timo Kösters
fd9e52a559
More sanity checks 2023-08-10 11:45:31 +02:00
Timo Kösters
0a0f227601 Merge branch 'registrationtokens' into 'next'
Registrationtokens

Closes #372

See merge request famedly/conduit!533
2023-08-09 20:27:19 +00:00
Timo Kösters
183558150d
fix: don't show removed rooms in space 2023-08-09 22:21:21 +02:00
Timo Kösters
c028e0553c
feat: registration tokens 2023-08-09 18:27:30 +02:00
Timo Kösters
2581f7a10b Merge branch 'fix-broken-links' into 'next'
Docs: Fix broken links in docker documentation

See merge request famedly/conduit!520
2023-08-09 07:55:49 +00:00
Timo Kösters
3e518773e2 Merge branch 'improvements' into 'next'
cross signing fixes

See merge request famedly/conduit!532
2023-08-07 16:11:11 +00:00
Timo Kösters
888f7e4403 Merge branch 'more-logging-enrichment' into 'next'
Slightly more logging improvements

See merge request famedly/conduit!530
2023-08-07 16:04:12 +00:00
Timo Kösters
d82c26f0a9
Avatars for sliding sync DMs 2023-08-07 17:54:08 +02:00
Timo Kösters
c1e2ffc0cd
improvement: maybe cross signing really works now 2023-08-07 13:55:44 +02:00
June
06fccbc340 debug log before and after nofile soft limit increases
Signed-off-by: June <june@girlboss.ceo>
2023-08-03 14:51:39 -10:00
girlbossceo
fbd8090b0b log room ID for invalid room topic event errors
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-08-03 08:54:47 -10:00
Timo Kösters
06ab707c79 Merge branch 'rusqlite-update' into 'next'
bump rusqlite to 0.29.0

See merge request famedly/conduit!529
2023-08-02 05:06:54 +00:00
Timo Kösters
174a580319 Merge branch 'base64-update' into 'next'
update base64 to 0.21.2

See merge request famedly/conduit!528
2023-08-02 05:06:24 +00:00
June
fbb256dd91 bump rusqlite to 0.29.0
Signed-off-by: June <june@girlboss.ceo>
2023-08-01 15:09:55 -10:00
June
5a7bade476 update base64 to 0.21.2
Signed-off-by: June <june@girlboss.ceo>
2023-08-01 14:48:50 -10:00
Timo Kösters
d2bfcb018e Merge branch 'error-leak-fix' into 'next'
sanitise potentially sensitive errors

See merge request famedly/conduit!523
2023-08-01 11:25:06 +00:00
Timo Kösters
08f0f17ff7 Merge branch 'msteenhagen-next-patch-18830' into 'next'
Removed ambiguity in deploy.md

See merge request famedly/conduit!525
2023-08-01 11:23:47 +00:00
Timo Kösters
57b86f1130 Merge branch 'msteenhagen-next-patch-22350' into 'next'
Correct option error adduser in DEPLOY.md

See merge request famedly/conduit!526
2023-08-01 11:23:28 +00:00
Maarten Steenhagen
3a6eee7019 Correct option error adduser in DEPLOY.md 2023-08-01 11:03:31 +00:00
Maarten Steenhagen
9ce1cad983 Changed 'right' to 'appropriate' to avoid ambiguity (original could be read as right-hand-side) 2023-08-01 10:58:07 +00:00
Timo Kösters
10da9485a5 Merge branch 'threads' into 'next'
fix: threads get updated properly

See merge request famedly/conduit!524
2023-07-31 14:24:11 +00:00
Timo Kösters
acfe381dd3
fix: threads get updated properly
Workaround for element web while waiting for https://github.com/matrix-org/matrix-js-sdk/pull/3635
2023-07-31 16:18:23 +02:00
girlbossceo
83805c66e5 sanitise potentially sensitive errors
prevents errors like DB or I/O errors from leaking filesystem paths

Co-authored-by: infamous <ehuff007@gmail.com>
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-30 17:30:16 +00:00
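A minimal sketch of the sanitising pattern (hypothetical helper, assuming the `tracing` crate; not Conduit's actual error type): keep the detailed error in the server log and return a path-free generic message to the client.

```rust
// Hypothetical helper: the full error (which may contain filesystem
// paths) only reaches the server log, never the client response.
fn sanitize_error(err: &std::io::Error) -> String {
    tracing::error!("internal I/O error: {err}");
    "Internal server error".to_owned()
}
```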
Timo Kösters
afd8112e25 Merge branch 'spaces' into 'next'
Automatic update checker

See merge request famedly/conduit!522
2023-07-29 19:55:51 +00:00
Timo Kösters
b8c164dc60
feat: version checker 2023-07-29 21:53:57 +02:00
Timo Kösters
0453a72890 Merge branch 'patch-1' into 'next'
fix: s/ok_or/ok_or_else in relevant places

See merge request famedly/conduit!521
2023-07-29 19:19:05 +00:00
girlbossceo
e2c914cc11 fix: s/ok_or/ok_or_else in relevant places
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 19:17:12 +00:00
Timo Kösters
da907451e7
Admin commands to sign and verify jsons 2023-07-29 20:00:12 +02:00
Timo Kösters
2b4a6c96ee Merge branch 'small-logging-improvements' into 'next'
Slight logging improvements

See merge request famedly/conduit!517
2023-07-29 15:00:42 +00:00
girlbossceo
d7061e6984 cargo fmt
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:30:48 +00:00
girlbossceo
3494d7759e Return "Hello from Conduit!" on the / route
akin to Synapse's "It works!" page, removing an unnecessary warning
about the / route being unknown

Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:29:26 +00:00
girlbossceo
cc5dcceacc Log the room ID, event ID, PDU, and event type where possible
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:29:26 +00:00
girlbossceo
863103450c Log the unknown login type in warning level
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:29:26 +00:00
girlbossceo
a0148a9996 Print relevant room ID and ACL'd server in informational level
These are room ACLs, not server ACLs. That causes confusion: people
think their Conduit homeserver was ACL'd. Print where these are coming from
at the informational level.

Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:29:26 +00:00
girlbossceo
1f867a2c86 Only print raw malformed JSON body in debug level
Signed-off-by: girlbossceo <june@girlboss.ceo>
2023-07-29 14:29:26 +00:00
purplemeteorite
c0a2acb869 Merge branch 'next' into 'fix-broken-links'
# Conflicts resolved:
#   docker/README.md
2023-07-29 08:02:56 +00:00
Timo Kösters
97835541ce Merge branch 'uak-next-patch-77212' into 'next'
Change link from docker-compose.override.traefik.yml to docker-compose.override.yml in README.md

See merge request famedly/conduit!514
2023-07-29 07:13:58 +00:00
purplemeteorite
081cc66eda fixed broken traefik links in docker README 2023-07-29 08:26:34 +02:00
purplemeteorite
7489e2c4f6 moved docker-compose.yml into the docker folder 2023-07-29 08:23:17 +02:00
Timo Kösters
1e675dbb68 Merge branch 'next' into 'next'
Docs: OCI image registries and tags

See merge request famedly/conduit!492
2023-07-28 19:34:42 +00:00
Timo Kösters
f4c1748ab1 Merge branch 'bugfix/well-known-missing' into 'next'
It's ok not being able to find a .well-known response.

See merge request famedly/conduit!519
2023-07-28 15:45:59 +00:00
Tobias Tom
7990822f72 It's ok not being able to find a .well-known response. 2023-07-28 16:26:40 +01:00
Timo Kösters
2a100412fa Merge branch 'relax-rocksdb' into 'next'
relax recovery mode

See merge request famedly/conduit!516
2023-07-27 06:12:31 +00:00
Timo Kösters
3e7652909b Merge branch 'maximize-fd-limit' into 'next'
maximize fd limit

See merge request famedly/conduit!515
2023-07-27 06:11:05 +00:00
Charles Hall
9fb8498067
relax recovery mode 2023-07-26 15:32:36 -07:00
Charles Hall
291290db92
maximize fd limit 2023-07-26 13:24:44 -07:00
uak
54a115caf3 Change link from docker-compose.override.traefik.yml to docker-compose.override.yml in README.md 2023-07-26 18:53:19 +00:00
Timo Kösters
81866170f0 Merge branch 'spaces' into 'next'
fix: spaces with restricted rooms

See merge request famedly/conduit!513
2023-07-26 06:40:00 +00:00
Timo Kösters
bf46829595
fix: spaces with restricted rooms 2023-07-26 08:34:12 +02:00
Timo Kösters
9f14ad7125 Merge branch 'sync-up-debian-packaging' into 'next'
Sync up Debian packaging

See merge request famedly/conduit!510
2023-07-24 17:10:45 +00:00
Timo Kösters
90a10c84ef Merge branch 'slidingfixes' into 'next'
Better sliding sync

See merge request famedly/conduit!511
2023-07-24 08:48:27 +00:00
Timo Kösters
d220641d64
Sliding sync subscriptions, e2ee, to_device messages 2023-07-24 10:42:52 +02:00
Timo Kösters
caddc656fb
slightly better sliding sync 2023-07-24 10:42:47 +02:00
Paul van Tilburg
b1a591a06c
Also create the conduit (system) group
The `chown` command mentioned later in `DEPLOY.md` needs this group to
exist. Also make sure this account cannot be used to log in, by
disabling its password and its shell.

This is similar to how the Debian `postinst` script does this.
2023-07-23 12:53:43 +02:00
Paul van Tilburg
3cd3d0e0ff
Add section about how to download/install/deploy
This refers to `DEPLOY.md` as to not duplicate the information.
2023-07-23 12:53:36 +02:00
Paul van Tilburg
433dad6ac2
Turn README.Debian into a markdown file
It is common to have a markdown file per deployment subdirectory.
Still install it as `README.Debian` to `/usr/share/doc/matrix-conduit`
as per Debian policy.

Also update the link in the main `README.md` file.
2023-07-23 12:37:38 +02:00
Paul van Tilburg
8cf408e966
Fix up permissions of the database path
Also apply the database creation and ownership change on every
installation and upgrade.
2023-07-23 12:37:38 +02:00
Timo Kösters
1e560529d8 Merge branch 'nix-upkeep' into 'next'
Nix upkeep

See merge request famedly/conduit!505
2023-07-23 09:23:41 +00:00
Timo Kösters
ff98444d03 Merge branch 'nogroup' into 'next'
[Security fix] Create dedicated user group

See merge request famedly/conduit!509
2023-07-23 09:22:39 +00:00
x4u
82f31d6b72 Replace nogroup with dedicated user group 2023-07-23 14:21:36 +08:00
Charles Hall
6ae5143ff5
only listen on IPv6 since that's what conduit does 2023-07-21 12:12:37 -07:00
purplemeteorite
bd8fec3836 changed registry options
1. Recommended GitLab's own registry over Docker Hub. (Reason: https://gitlab.com/famedly/conduit/-/merge_requests/492#note_1457220261)
2. Added the development image :next to the list of options.
3. Displayed text for Docker Hub now contains "docker.io" as part of the link for easier copy-paste for podman users. Clicking on the link still takes you to the website.
2023-07-21 20:33:32 +02:00
Charles Hall
742331e054
Revert "only use musl on x86_64"
This reverts commit 56f0f3dfa4.

This shouldn't be needed anymore since [this][0] reached nixos-unstable.

[0]: https://github.com/NixOS/nixpkgs/pull/242889
2023-07-16 13:48:05 -07:00
Charles Hall
abd8e1bf54
nixpkgs' rocksdb is now new enough :)
This reverts commit abd0a014e8.
2023-07-16 13:47:42 -07:00
Charles Hall
fa3b1fd9bd
update flake.lock
Flake lock file updates:

• Updated input 'crane':
    'github:ipetkov/crane/75f7d715f8088f741be9981405f6444e2d49efdd' (2023-06-13)
  → 'github:ipetkov/crane/8b08e96c9af8c6e3a2b69af5a7fa168750fcf88e' (2023-07-07)
• Updated input 'crane/rust-overlay':
    'github:oxalica/rust-overlay/c535b4f3327910c96dcf21851bbdd074d0760290' (2023-06-03)
  → 'github:oxalica/rust-overlay/f9b92316727af9e6c7fee4a761242f7f46880329' (2023-07-03)
• Updated input 'fenix':
    'github:nix-community/fenix/df0a6e4ec44b4a276acfa5a96d2a83cb2dfdc791' (2023-06-17)
  → 'github:nix-community/fenix/39096fe3f379036ff4a5fa198950b8e79defe939' (2023-07-16)
• Updated input 'fenix/rust-analyzer-src':
    'github:rust-lang/rust-analyzer/a5a71c75e62a0eaa1b42a376f7cf3d348cb5dec6' (2023-06-16)
  → 'github:rust-lang/rust-analyzer/996e054f1eb1dbfc8455ecabff0f6ff22ba7f7c8' (2023-07-15)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/a1720a10a6cfe8234c0e93907ffe81be440f4cef' (2023-05-31)
  → 'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/04af42f3b31dba0ef742d254456dc4c14eedac86' (2023-06-17)
  → 'github:NixOS/nixpkgs/8acef304efe70152463a6399f73e636bcc363813' (2023-07-15)
2023-07-16 13:37:40 -07:00
Timo Kösters
e9946f81a0 Merge branch 'e2eefed' into 'next'
fix: e2ee over federation

See merge request famedly/conduit!504
2023-07-16 14:54:44 +00:00
Timo Kösters
a9ba067e77
fix: e2ee over federation 2023-07-16 16:50:03 +02:00
Timo Kösters
706148f941 Merge branch 'nhekobug' into 'next'
fix: nheko e2ee verification bug

See merge request famedly/conduit!503
2023-07-15 21:46:12 +00:00
Timo Kösters
24402312c5
fix: could not verify own events 2023-07-15 23:43:25 +02:00
Jonas Zohren
17180a3e08 capitalize names 2023-07-13 16:54:56 +00:00
Timo Kösters
3c6ffd88bf Merge branch 'unbreak-aarch64-nix' into 'next'
only use musl on x86_64

See merge request famedly/conduit!502
2023-07-11 14:11:45 +00:00
Timo Kösters
c3966f501c
fix: nheko e2ee verification bug 2023-07-10 23:10:27 +02:00
Charles Hall
56f0f3dfa4
only use musl on x86_64
Since that's all I've tested it on. Apparently this caused issues on
aarch64 even though it allegedly shouldn't.
2023-07-10 11:06:19 -07:00
Timo Kösters
ad06d475de Merge branch 'sliding' into 'next'
Very basic Element X support and fixes

See merge request famedly/conduit!501
2023-07-10 14:35:35 +00:00
Timo Kösters
0b4e3de9c0
fix: spaces with restricted rooms 2023-07-10 16:28:08 +02:00
Timo Kösters
edd4a3733f
fix: actually clear memory in the admin commands 2023-07-10 16:27:42 +02:00
Timo Kösters
c17187777f
fix: never try federation with self 2023-07-10 16:26:36 +02:00
Timo Kösters
78e7b711df
fix: better sliding sync 2023-07-10 16:25:33 +02:00
Timo Kösters
4b7d3e24dd
bump ruma 2023-07-10 16:24:57 +02:00
Timo Kösters
e4f769963f
feat: very simple sliding sync implementation 2023-07-06 10:32:25 +02:00
Jonas Zohren
eab5dac6e8 Merge branch 'fix-docker-build-image-size' into 'next'
ci: Fix "0 B" image size display

See merge request famedly/conduit!499
2023-07-04 21:18:25 +00:00
Jonas Zohren
c4824a6ebc ci: Fix "0 B" image size display
works around gitlab issue https://gitlab.com/gitlab-org/gitlab/-/issues/388865#workaround
2023-07-04 21:13:11 +00:00
Timo Kösters
f8a36e7554 Merge branch 'memory' into 'next'
improvement: better memory usage and admin commands to analyze it

See merge request famedly/conduit!497
2023-07-03 17:43:27 +00:00
Timo Kösters
a2c3256ced
improvement: better memory usage and admin commands to analyze it 2023-07-03 19:41:07 +02:00
Timo Kösters
833c1505f1 Merge branch 'hierarchy' into 'next'
feat: space hierarchies

See merge request famedly/conduit!495
2023-07-03 13:56:47 +00:00
Timo Kösters
bac13d08ae
fix: cache invalidation 2023-07-02 22:50:50 +02:00
Timo Kösters
f0a27dcb00 Merge branch 'next' into 'next'
update example configurations in DEPLOY.md for Apache and Nginx which include...

See merge request famedly/conduit!493
2023-07-02 20:20:31 +00:00
Timo Kösters
9d49d599f3
feat: space hierarchies 2023-07-02 22:12:06 +02:00
Jacob Taylor
2640f67e4b remove comments 2023-07-02 18:00:30 +00:00
Timo Kösters
eb8bc1af8d Merge branch 'jplatte/axum06' into 'next'
Upgrade axum to 0.6

See merge request famedly/conduit!494
2023-07-02 07:02:04 +00:00
Jonas Platte
0ded637b4a
Upgrade axum to 0.6 2023-06-29 11:20:52 +02:00
Jacob Taylor
dc50197a13 update example configurations in DEPLOY.md for Apache and Nginx, which include upstream proxy timeouts of 5 minutes to allow for room joins that take a while 2023-06-29 02:42:32 +00:00
purplemeteorite
06a1321e56 easier-to-read docker setup instructions 2023-06-28 18:51:44 +02:00
Timo Kösters
6a6f8e80f1 Merge branch 'joinfix' into 'next'
improvement: randomize server order for alias joins

See merge request famedly/conduit!491
2023-06-28 15:47:36 +00:00
Timo Kösters
fd1ccbd3ad
improvement: randomize server order for alias joins 2023-06-28 17:44:30 +02:00
Timo Kösters
3a1a72df98 Merge branch 'stateres' into 'next'
Make state resolution more resilient and add some sync performance improvements

See merge request famedly/conduit!490
2023-06-28 10:46:32 +00:00
Timo Kösters
84784970b2 Merge branch 'fix/docker-ci-pipeline' into 'next'
ci: Adjust to current docker

See merge request famedly/conduit!488
2023-06-27 18:54:40 +00:00
Timo Kösters
d64a56d88b
Do soft fail check before doing state res to allow leave events 2023-06-27 18:48:34 +02:00
Timo Kösters
be877ef719
Improve sync performance with more caching and wrapping things in Arcs to avoid copies 2023-06-27 13:15:11 +02:00
Timo Kösters
7c6d25dcd1
Do state res even if the event soft fails 2023-06-27 13:13:33 +02:00
Timo Kösters
b671238aa0 Merge branch 'rumafix' into 'next'
bump ruma

See merge request famedly/conduit!489
2023-06-26 21:11:20 +00:00
Timo Kösters
91180e011d
bump ruma 2023-06-26 23:10:26 +02:00
Jonas Zohren
26b8605fa0 ci: Adjust to current docker 2023-06-26 22:26:33 +02:00
Timo Kösters
dbd360ebb9 Merge branch 'unbreak' into 'next'
fix rustc version, nix upkeep

See merge request famedly/conduit!482
2023-06-26 19:12:46 +00:00
Timo Kösters
48e6e0659f Merge branch 'relations' into 'next'
Add relations endpoints, edits and threads work now

See merge request famedly/conduit!487
2023-06-26 19:11:57 +00:00
Timo Kösters
72eb1972c1
Add relations endpoints, edits and threads work now 2023-06-26 12:38:51 +02:00
Timo Kösters
63cbaedb79 Merge branch 'bearerfix' into 'next'
fix: send correct bearer token to appservices

Closes #358

See merge request famedly/conduit!486
2023-06-26 07:16:20 +00:00
Timo Kösters
db6def8800
fix: send correct bearer token to appservices 2023-06-26 09:15:52 +02:00
Timo Kösters
caa841c434 Merge branch 'contextfix' into 'next'
fix: /context for element android. start and end must be set even with limit=0

See merge request famedly/conduit!485
2023-06-26 06:34:10 +00:00
Timo Kösters
49a0f3a60d
fix: /context for element android. start and end must be set even with limit=0 2023-06-26 08:33:31 +02:00
Timo Kösters
bac82f43af Merge branch 'compressionoff' into 'next'
Disable compression, see https://en.wikipedia.org/wiki/BREACH

See merge request famedly/conduit!484
2023-06-25 21:46:34 +00:00
Timo Kösters
15cc801840
Disable compression, see https://en.wikipedia.org/wiki/BREACH 2023-06-25 23:43:54 +02:00
Timo Kösters
5f9ca8e458 Merge branch 'threads' into 'next'
feat: WIP relationships and threads

See merge request famedly/conduit!483
2023-06-25 20:59:53 +00:00
Timo Kösters
c7e0ea525a
feat: WIP relationships and threads 2023-06-25 19:40:33 +02:00
Charles Hall
abd0a014e8
nixpkgs' rocksdb is too old :( 2023-06-17 17:04:57 -07:00
Charles Hall
4a7d3c7301
upgrade rust in Cargo.toml/flake.nix
Looks like this should've happened as part of !479.
2023-06-17 17:04:11 -07:00
Charles Hall
15e60818c9
pin nixos-unstable, update flake.lock
`nixos-unstable` is the rolling release channel of NixOS. The default is
the master branch, which doesn't always have a populated binary cache
and so may result in compiling a bunch of stuff unnecessarily.
2023-06-17 17:02:10 -07:00
Timo Kösters
def079267d Merge branch 'adjust-to-rust-version-bumps' into 'next'
chore(ci): Adjust to rust version bumps

See merge request famedly/conduit!479
2023-06-10 15:35:22 +00:00
Jonas Zohren
a3a9b60abc chore(ci): Adjust to rust version bumps 2023-06-10 15:35:22 +00:00
Timo Kösters
808b12f618 Merge branch 'restricted' into 'next'
fix: restricted room error is now FORBIDDEN

See merge request famedly/conduit!478
2023-06-08 18:52:03 +00:00
Timo Kösters
faa9208a3e
cargo fmt 2023-06-08 20:51:34 +02:00
Timo Kösters
1ea27c4f97
fix: restricted room error is now FORBIDDEN 2023-06-08 20:49:42 +02:00
Timo Kösters
422ee40107 Merge branch 'mr-conduit-appservice-login' into 'next'
feat: support end to bridge encryption

See merge request famedly/conduit!454
2023-05-26 12:48:23 +00:00
Timo Kösters
0280fa5793 Merge branch 'next' into 'next'
fix nix readme to work with ipv6

See merge request famedly/conduit!475
2023-05-26 12:30:02 +00:00
digital
664d6baace fix: make requested changes 2023-05-26 13:06:28 +02:00
be9196430d fix nix readme to work with ipv6 2023-05-25 18:21:01 +00:00
Jonas Zohren
533bccad8f Merge branch '350-debian-package-from-docker-packager-result-doesn-t-seem-to-install-configuration-files' into 'next'
Fix CI + Debian packaging

Closes #350

See merge request famedly/conduit!474
2023-05-21 20:41:08 +00:00
Jonas Zohren
a4261aac76 * Fix Debian builds by actually including the whole debian directory into deb creation
* Fix CI by explicitly setting hostname of docker in docker service
* Fix Docker build by bumping the Rust version to 1.69
* Fix cargo check in CI by bumping the Rust version to 1.69
2023-05-21 20:41:08 +00:00
Timo Kösters
c38df57279 Merge branch 'deploymd' into 'next'
Minor DEPLOY.md changes

See merge request famedly/conduit!473
2023-05-21 13:17:29 +00:00
Timo Kösters
4e2bbf9d6a
Minor DEPLOY.md changes 2023-05-21 15:16:23 +02:00
Timo Kösters
7a9ec851fc Merge branch 'bump' into 'next'
Bump dependencies

See merge request famedly/conduit!472
2023-05-21 11:45:20 +00:00
Timo Kösters
d62cd2ae51
chore: bump dependencies 2023-05-21 13:42:59 +02:00
Timo Kösters
49b5af6d45
chore: bump rocksdb 2023-05-21 13:41:51 +02:00
Timo Kösters
1f1444da8c Merge branch 'pushrules' into 'next'
Improvements to pushrules endpoints

Closes #316

See merge request famedly/conduit!461
2023-05-21 10:41:31 +00:00
Timo Kösters
2a9a908343 Merge branch 'x4u/add-apache-cloudflare-deploy-info' into 'next'
X4u/add apache cloudflare deploy info

See merge request famedly/conduit!471
2023-05-21 07:04:59 +00:00
x4u
921b266d86 X4u/add apache cloudflare deploy info 2023-05-21 07:04:58 +00:00
Timo Kösters
dbbd164e39 Merge branch 'admin-command-fix' into 'next'
Recognize admin commands without : after tag

See merge request famedly/conduit!470
2023-05-17 19:09:54 +00:00
Jonathan Flueren
f5e3b0e2dd Recognize admin commands without : after tag
Very useful since many Matrix clients don't insert : after user tags
2023-05-15 19:25:57 +00:00
Timo Kösters
1b9e63f426 Merge branch 'unbreak-nix' into 'next'
Fix Nix builds (actually) (for real) (it seriously builds this time)

See merge request famedly/conduit!466
2023-04-05 09:59:09 +00:00
Charles Hall
eb4323cc0f
use mold on linux 2023-04-04 19:16:07 -07:00
Charles Hall
a6712627e4
tiny refactor 2023-04-04 19:15:09 -07:00
Charles Hall
3be32c4dac
factor out shared things 2023-04-04 17:52:15 -07:00
Charles Hall
55149e3336
use crane instead of naersk
I guess naersk still doesn't support git dependencies using workspace
inheritance, but crane does.
2023-04-04 17:42:24 -07:00
Charles Hall
2b63e46fc5
use system rocksdb
This mostly just improves build times.
2023-04-04 17:42:13 -07:00
Charles Hall
a0c449e570
update flake.lock 2023-04-04 17:09:49 -07:00
Charles Hall
c997311bea
Revert "build(nix): fix flake builds"
This reverts commit 5d913f7010.

Sorry, I don't understand how any of this works, and it seems pretty
opaque/difficult to fine-tune.
2023-04-04 17:02:34 -07:00
Kévin Commaille
1929ca5d9d
Add a database migration to fix and update the default pushrules 2023-03-18 15:03:57 +01:00
Kévin Commaille
88c6bf7595
Always return an error if a push rule is not found 2023-03-18 15:03:57 +01:00
Kévin Commaille
4635644e21
Use the ruma methods for managing rulesets 2023-03-18 15:03:57 +01:00
Kévin Commaille
f53ecaa97d
Bump Ruma 2023-03-18 15:03:56 +01:00
Timo Kösters
f704169aeb Merge branch 'fixjoin' into 'next'
fix: let requests continue even if client disconnects

See merge request famedly/conduit!464
2023-03-18 07:59:36 +00:00
Timo Kösters
2a7c4693b8
fix: don't accept new requests when shutting down 2023-03-18 08:58:20 +01:00
Timo Kösters
da3871f39a
fix: let requests continue even if client disconnects 2023-03-17 22:45:13 +01:00
Timo Kösters
664ee7d89a Merge branch 'backfill' into 'next'
feat: handle backfill requests

See merge request famedly/conduit!459
2023-03-16 13:32:42 +00:00
Timo Kösters
42b12934e3
Don't crash when a room errors 2023-03-13 10:43:09 +01:00
Timo Kösters
63f787f635
Reduce logs from info to debug 2023-03-13 10:39:19 +01:00
Timo Kösters
a1bd348977
fix: history visibility 2023-03-13 10:39:19 +01:00
Timo Kösters
27f29ba699
fix: SRV lookups should end with a period 2023-03-13 10:39:19 +01:00
Timo Kösters
cb0ce5b08f
Logs for server resolution 2023-03-13 10:39:18 +01:00
Timo Kösters
b7c99788e4
All the logs 2023-03-13 10:39:18 +01:00
Timo Kösters
2316d89048
Even more logging 2023-03-13 10:39:18 +01:00
Timo Kösters
bde4880c1d
fix: don't unwrap server keys 2023-03-13 10:39:18 +01:00
Timo Kösters
8b648d0d3f
fix: force abort federation requests after 2 minutes 2023-03-13 10:39:18 +01:00
Timo Kösters
4617ee2b6b
More logging for remote joins 2023-03-13 10:39:18 +01:00
Timo Kösters
10fa686c77
feat: respect history visibility 2023-03-13 10:39:18 +01:00
Timo Kösters
2a16a5e967
fix: don't send nulls as unsigned content 2023-03-13 10:39:17 +01:00
Timo Kösters
2aa0a2474b
fix: ignore unparsable pdus in /send 2023-03-13 10:39:17 +01:00
Timo Kösters
d39003ffc0
Allow backfilling create event itself 2023-03-13 10:39:17 +01:00
Timo Kösters
eae0989c40
fix: refactor backfill and add support for search 2023-03-13 10:39:17 +01:00
Timo Kösters
17a6431f5f
fix: make backfilled events reachable 2023-03-13 10:39:17 +01:00
Timo Kösters
fcfb06ffa6
fix: allow handling create event itself 2023-03-13 10:39:17 +01:00
Timo Kösters
7bdd9660aa
feat: ask for backfill 2023-03-13 10:39:17 +01:00
Timo Kösters
23b18d71ee
feat: handle backfill requests
Based on https://gitlab.com/famedly/conduit/-/merge_requests/421
2023-03-13 10:39:16 +01:00
Timo Kösters
84cfed5231 Merge branch 'fix-nix-build' into 'next'
Fix Nix flake build

See merge request famedly/conduit!456
2023-02-18 21:18:51 +00:00
Timo Kösters
cdcf4a017d Merge branch 'fixreset' into 'next'
fix: allow reactivation of users using reset-password admin command

See merge request famedly/conduit!458
2023-02-11 11:49:54 +00:00
Timo Kösters
fc0aff20cf
fix: allow reactivation of users using reset-password admin command 2023-02-11 12:43:41 +01:00
Timo Kösters
4223288cdf Merge branch 'fixbadservernameusers' into 'next'
fix: ignore bad user ids in migration

See merge request famedly/conduit!457
2023-02-07 16:19:59 +00:00
Timo Kösters
a4f18f99ad
fix: ignore bad user ids 2023-02-07 16:29:41 +01:00
Jonas Zohren
06df04f61c Merge branch 'fix_docker_healthcheck_for_address' into 'next'
Add a dynamic address resolution to the Docker healthcheck

See merge request famedly/conduit!434
2023-01-27 22:43:04 +00:00
Moritz Heiber
cfcc9086ff Add a dynamic address resolution to the Docker healthcheck 2023-01-27 22:43:04 +00:00
Yusuf Bera Ertan
11b9cfad5e
docs: update nix comment for rust-version in Cargo.toml 2023-01-28 00:14:58 +03:00
Yusuf Bera Ertan
5d913f7010
build(nix): fix flake builds 2023-01-28 00:10:21 +03:00
Jonas Zohren
d68dad580b Merge branch 'complement-improvements' into 'next'
Complement improvements

See merge request famedly/conduit!404
2023-01-27 16:41:34 +00:00
Jonathan de Jong
e13dc7c14a add little readme 2023-01-26 18:28:33 +01:00
Jonathan de Jong
b158896396 Merge remote-tracking branch 'origin/next' into complement-improvements 2023-01-26 18:19:39 +01:00
Timo Kösters
f95dd4521c Merge branch 'validate-state-of-admins-room' into 'next'
Validate PDU in admins room

See merge request famedly/conduit!382
2023-01-24 13:46:49 +00:00
Timo Kösters
1e77373332 Merge branch 'braid/ci-magic' into 'next'
fix: adjust CI config to runner requirements

See merge request famedly/conduit!455
2023-01-19 07:46:36 +00:00
The one with the braid
f01b96588d fix: adjust CI config to runner requirements
- make use of more stable BTRFS driver
- set default pull policy to `if-not-present`

Signed-off-by: The one with the braid <the-one@with-the-braid.cf>
2023-01-19 07:42:23 +01:00
digital
4d589d9788 feat: support end to bridge encryption
by implementing appservice logins
2023-01-18 23:34:18 +01:00
Timo Kösters
815db0d962 Merge branch 'joinfix' into 'next'
Maybe fix room joins

See merge request famedly/conduit!453
2023-01-14 20:22:55 +00:00
Timo Kösters
809c9b4481
Maybe fix room joins
This is a workaround for https://github.com/hyperium/hyper/issues/2312
2023-01-14 21:20:16 +01:00
Timo Kösters
c6e3438e76 Merge branch 'trusted-servers-doc' into 'next'
document `trusted_servers` option

See merge request famedly/conduit!451
2023-01-09 16:35:54 +00:00
Charles Hall
844508bc48
document trusted_servers option 2023-01-09 08:14:13 -08:00
Timo Kösters
b3aec63d67 Merge branch 'partial-nix-fix' into 'next'
partial nix fix

See merge request famedly/conduit!446
2023-01-09 15:58:40 +00:00
Timo Kösters
2da4ae6b3b Merge branch 'code-of-conduct' into 'next'
Add Contributor's Covenant Code of Conduct

See merge request famedly/conduit!448
2023-01-09 15:37:19 +00:00
Timo Kösters
5e6b498c22 Merge branch 'fix-nix-docs' into 'next'
fix nix docs

See merge request famedly/conduit!449
2023-01-09 15:35:39 +00:00
Charles Hall
391beddaf4
fix nix docs
I made some silly copy paste errors while writing this...
2023-01-08 12:44:59 -08:00
r3g_5z
112b76b1c1
Add Contributor's Covenant Code of Conduct
Signed-off-by: r3g_5z <june@girlboss.ceo>
2023-01-08 02:44:25 -05:00
Charles Hall
315944968b
remind people to update the hash
And offer help since it's pretty easy but impossible if you don't have
Nix installed.
2022-12-23 00:30:36 -08:00
Charles Hall
9f74555c88
update flake.lock 2022-12-23 00:29:43 -08:00
Charles Hall
0a4e8e5909
update rust toolchain hash 2022-12-23 00:29:43 -08:00
Timo Kösters
19156c7bbf
Update Cargo.lock 2022-12-21 16:16:07 +01:00
Timo Kösters
2a66ad4329
Bump version to 0.6.0-alpha 2022-12-21 16:08:05 +01:00
Timo Kösters
53f14a2c4c
Merge remote-tracking branch 'origin/next' 2022-12-21 15:51:47 +01:00
Timo Kösters
d20f21ae32 Merge branch 'nextversion' into 'next'
Preparing for v0.5.0 release

See merge request famedly/conduit!443
2022-12-21 13:12:18 +00:00
Timo Kösters
f7db3490f6
Bump version to v0.5.0 2022-12-21 14:08:09 +01:00
Timo Kösters
c7a7c913d4
Bump ruma 2022-12-21 14:08:08 +01:00
Timo Kösters
76a82339a2
tweak default rocksdb settings 2022-12-21 13:44:23 +01:00
Timo Kösters
94df9cdbba Merge branch 'Nyaaori/prev_events-config-option' into 'next'
Make prev_events fetch limit configurable

See merge request famedly/conduit!422
2022-12-21 11:06:42 +00:00
Timo Kösters
b231d7f15c Merge branch 'Nyaaori/membership-events-reason' into 'next'
Implement membership ban/join/leave/invite reason

See merge request famedly/conduit!442
2022-12-21 10:58:05 +00:00
Nyaaori
7cc346bc18
feat: Implement membership ban/join/leave/invite reason support 2022-12-21 11:45:12 +01:00
Timo Kösters
48bc0db723 Merge branch 'Nyaaori/code-cleanup' into 'next'
Code Cleanup

See merge request famedly/conduit!441
2022-12-21 10:00:40 +00:00
Nyaaori
7c196f4e00
feat: Add max prev events config option, allowing adjusting limit for prev_events fetching 2022-12-21 10:55:32 +01:00
Nyaaori
c86313d4fa
chore: code cleanup
https://rust-lang.github.io/rust-clippy/master/index.html#op_ref

https://rust-lang.github.io/rust-clippy/master/index.html#str_to_string

https://rust-lang.github.io/rust-clippy/master/index.html#needless_lifetimes
2022-12-21 10:42:12 +01:00
Timo Kösters
7b98741163 Merge branch 'restrictedjoinlocally' into 'next'
improvement: handle restricted joins locally

See merge request famedly/conduit!439
2022-12-18 08:51:31 +00:00
Timo Kösters
2a04c213f9
improvement: handle restricted joins locally 2022-12-18 09:44:46 +01:00
Timo Kösters
d7eaa9c5cc Merge branch 'logging-cleanup' into 'next'
Replace println/dbg calls with corresponding macros from tracing crate

See merge request famedly/conduit!424
2022-12-18 06:57:23 +00:00
Timo Kösters
2a0515f528
Replace println/dbg calls with corresponding macros from tracing crate 2022-12-18 07:52:22 +01:00
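An illustrative before/after of the substitution (hypothetical function names, assuming `tracing` is already initialised as the logger):

```rust
use tracing::{debug, warn};

// Before: println!("handling PDU {:?}", event_id); dbg!(&event_id);
// After: structured, level-filtered events that the log config controls.
fn handle_pdu(event_id: &str, soft_failed: bool) {
    debug!(%event_id, "handling PDU");
    if soft_failed {
        warn!(%event_id, "PDU soft failed");
    }
}
```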
Timo Kösters
3930fd08d7 Merge branch 'docs' into 'next'
Update README

See merge request famedly/conduit!438
2022-12-18 06:06:26 +00:00
Timo Kösters
683eefbd0b
Update README 2022-12-18 07:02:07 +01:00
Timo Kösters
d963ad8cc1 Merge branch 'jaegerfix' into 'next'
fix: jaeger support

See merge request famedly/conduit!437
2022-12-18 05:52:49 +00:00
Timo Kösters
6d5e54a66b
fix: jaeger support 2022-12-18 06:37:03 +01:00
Timo Kösters
2b2bfb91c2 Merge branch 'up-ruma' into 'next'
Upgrade Ruma

See merge request famedly/conduit!435
2022-12-18 05:05:33 +00:00
Timo Kösters
f1d2574651
finish upgrade ruma 2022-12-17 09:28:08 +01:00
Jonas Platte
d39ce1401d
WIP: Upgrade Ruma 2022-12-16 11:57:32 +01:00
Jonas Platte
7fd5b22e3b
The procMacro option has long been on by default
… and it's good to let people have their own local configs that won't be
tracked by git.
2022-12-16 10:12:11 +01:00
Timo Kösters
db7a7085f4 Merge branch 'fix/pushrules_database' into 'next'
Migrate database to use correct rule id in pushrules.

See merge request famedly/conduit!405
2022-12-16 08:38:49 +00:00
Timo Kösters
5894d35eb2 Merge branch 'fixrestrictedjoin' into 'next'
fix: rejoining restricted rooms over federation

See merge request famedly/conduit!431
2022-11-30 21:32:12 +00:00
Timo Kösters
b9fd6127e2
fix: rejoining restricted rooms over federation 2022-11-30 22:30:55 +01:00
Jonas Zohren
bb9bc0a001 Merge branch 'dockerdoc-nginx-content-type' into 'next'
Describe a better way to enforce Content-Type in nginx

See merge request famedly/conduit!415
2022-11-29 17:56:14 +00:00
Timo Kösters
f4dd051a1d Merge branch 'sd-notify' into 'next'
call sd-notify after init and before exit

See merge request famedly/conduit!426
2022-11-28 15:50:12 +00:00
Vladan Popovic
06d3efc4d0 feat(systemd): call sd-notify after init and before exit 2022-11-27 22:17:15 +01:00
Vladan Popovic
66ad114e19 feat: add systemd feature flag 2022-11-27 22:17:15 +01:00
Jonas Zohren
4b737b46ac Merge branch 'cross-compiling' into 'next'
Added cross-compilation instructions

See merge request famedly/conduit!430
2022-11-27 20:15:48 +00:00
Orhideous
bcd522e75f Added cross-compilation instructions to DEPLOY.md 2022-11-27 20:15:47 +00:00
Jonas Zohren
249960b111 Merge branch 'fix-lock' into 'next'
Update Cargo.lock

See merge request famedly/conduit!427
2022-11-25 21:44:31 +00:00
Andriy Kushnir (Orhideous)
583aea187b
Update Cargo.lock 2022-11-25 23:13:58 +02:00
Timo Kösters
396dac6d82 Merge branch 'fixroomleave' into 'next'
fix: unable to leave room

See merge request famedly/conduit!419
2022-11-21 20:04:27 +00:00
Timo Kösters
9149be31af Merge branch 'logs-cleanup' into 'next'
Clean some noisy logs

See merge request famedly/conduit!423
2022-11-21 20:03:17 +00:00
Timo Kösters
32a4ded4a1 Merge branch 'Nyaaori/reduce-generated-token-length' into 'next'
Reduce length of generated access tokens and session ids

See merge request famedly/conduit!386
2022-11-21 20:02:20 +00:00
Timo Kösters
e3dabdf525 Merge branch 'Nyaaori/cleanup' into 'next'
misc. cleanup

See merge request famedly/conduit!420
2022-11-21 19:59:45 +00:00
Nyaaori
b59304a4df
Reduce length of generated access tokens and session ids
Reduces generated access tokens and session ids to 32 characters (~190 bits of entropy)
2022-11-21 20:51:59 +01:00
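The entropy figure follows from the alphabet size: 62 alphanumeric symbols give log2(62) ≈ 5.95 bits per character, so 32 characters ≈ 190 bits. A minimal sketch of such a generator with the `rand` crate (an illustration, not necessarily Conduit's exact code):

```rust
use rand::{distributions::Alphanumeric, thread_rng, Rng};

// 32 alphanumeric characters: 32 * log2(62) ≈ 190 bits of entropy.
fn random_string(length: usize) -> String {
    thread_rng()
        .sample_iter(&Alphanumeric)
        .take(length)
        .map(char::from)
        .collect()
}
```

An access token or session id would then be `random_string(32)`.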
Nyaaori
66bc41125c
refactor: cleanup 2022-11-21 20:50:39 +01:00
Nyaaori
6786c44f4d
chore: Fix MSRV
Ruma requires Rust 1.64
2022-11-21 20:50:30 +01:00
Andriy Kushnir (Orhideous)
a3a1db124d
Clean some noisy logs 2022-11-21 21:48:06 +02:00
Timo Kösters
3b3c451c83
fix: unable to leave room 2022-11-21 19:50:48 +01:00
Timo Kösters
cf99316082 Merge branch 'dendritefix' into 'next'
Dendrite invite fix

See merge request famedly/conduit!416
2022-11-09 20:27:22 +00:00
Timo Kösters
c063700255
fix: invite dendrite users 2022-11-09 21:14:17 +01:00
Timo Kösters
7540227388
chore: bump dependencies 2022-11-09 18:46:10 +01:00
Ticho 34782694
09015f113c Describe a better way to enforce Content-Type in nginx
add_header will not override the Content-Type header set by the server,
but will instead add another header below, which is obviously not ideal.

The proposed change will instead tell nginx to set the correct value for
this header straight away.
2022-11-08 15:56:24 +00:00
Paul Beziau
a2d8aec1e3 Moving the unwrapping of a variable
Moving the unwrapping of the variable "rule" into the condition instead of the if body, for the database migration from version 11 to 12.
2022-11-03 13:12:53 +00:00
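In other words, the value is now bound as part of the condition, so the separate `.unwrap()` in the if body goes away. A hypothetical illustration of the pattern (not the actual migration code):

```rust
use std::collections::HashMap;

// Before: `if map.get(key).is_some() { let v = map.get(key).unwrap(); ... }`
// After: `if let` binds the value inside the condition itself.
fn rename_rule(rules: &mut HashMap<String, String>, old: &str, new: &str) {
    if let Some(content) = rules.remove(old) {
        rules.insert(new.to_owned(), content);
    }
}
```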
Timo Kösters
ccdaaceb33 Merge branch 'ci-revamp-2022-10' into 'next'
Fix CI

See merge request famedly/conduit!414
2022-11-02 16:47:48 +00:00
Jonas Zohren
b37876f3b2 fix(ci): Only build in (remote host) docker and switch to glibc 2022-11-02 12:12:48 +01:00
Timo Kösters
e8e0a4dcc5 Merge branch 'Nyaaori/fix-trusted-server-panic' into 'next'
Cleanly handle invalid response from trusted server instead of panicking

See merge request famedly/conduit!411
2022-10-31 11:35:55 +00:00
Nyaaori
23cf39c525
Cleanly handle invalid response from trusted server instead of panicking 2022-10-31 12:28:30 +01:00
Nyaaori
00996dd834
Cargo Clippy 2022-10-31 09:31:17 +01:00
Timo Kösters
2a52f666dc Merge branch 'fixtyping' into 'next'
Fix typing indicators and unencrypted messages in encrypted rooms

See merge request famedly/conduit!409
2022-10-30 20:25:45 +00:00
Timo Kösters
0cf6545116
fix: not sending enough state on join 2022-10-30 21:23:43 +01:00
Timo Kösters
5d691f405e
fix: stuck typing indicators 2022-10-30 21:22:32 +01:00
Timo Kösters
c61914c8e1 Merge branch 'fixhead' into 'next'
fix: HEAD requests should continue to produce METHOD_NOT_ALLOWED

See merge request famedly/conduit!402
2022-10-30 19:45:58 +00:00
Timo Kösters
9548c84d32 Merge branch 'fixnotifcount' into 'next'
fix: element android did not reset notification counts

See merge request famedly/conduit!408
2022-10-30 19:43:39 +00:00
Timo Kösters
02dd3d32f2
fix: element android did not reset notification counts 2022-10-30 20:41:32 +01:00
Timo Kösters
7c98ba64aa
fix: HEAD requests should produce METHOD_NOT_ALLOWED 2022-10-30 19:53:05 +01:00
Jonathan de Jong
52018c3967 allow complement dockerfile to copy over target folder 2022-10-28 21:04:05 +02:00
Timo Kösters
e86fb11512 Merge branch 'nabulator-next-patch-84388' into 'next'
Update nginx configuration to allow for larger uploads.

See merge request famedly/conduit!407
2022-10-28 13:32:02 +00:00
Timo Kösters
20e3c42456 Merge branch 'add-nix-flake' into 'next'
add nix flake

See merge request famedly/conduit!403
2022-10-28 13:31:21 +00:00
Nabulator
1aff2a54ef comment typo 2022-10-27 04:23:07 +00:00
Nabulator
238ebcfcac Update nginx configuration to allow for larger uploads. 2022-10-27 04:20:56 +00:00
Timo Kösters
876fdf480d Merge branch '3pid_403_next' into 'next'
Return 403 to 3pid token routes to signal not implemented

See merge request famedly/conduit!375
2022-10-25 20:47:41 +00:00
James Blachly
3bc0a1924b Return 403 to 3pid token routes to signal not implemented 2022-10-25 20:47:41 +00:00
Timo Kösters
4af998963b Merge branch 'fix-axum-request-size' into 'next'
fix(main): fix request size limit to max_request_size (axum defaults 2MB)

See merge request famedly/conduit!406
2022-10-25 20:34:33 +00:00
AndSDev
10d2da3009 fix(main): fix request size limit to max_request_size (axum defaults 2MB) 2022-10-25 12:53:58 +03:00
Paul Beziau
d47c1a8ba6 Fix database version check & code formating 2022-10-21 12:27:11 +00:00
Paul Beziau
9c0c74f547 Migrate database to use correct rule id in pushrules.
It converts:
- ".m.rules.call" to ".m.rule.call"
- ".m.rules.room_one_to_one" to ".m.rule.room_one_to_one"
- ".m.rules.encrypted_room_one_to_one" to ".m.rule.encrypted_room_one_to_one"
- ".m.rules.message" to ".m.rule.message"
- ".m.rules.encrypted" to ".m.rule.encrypted"

related to issue #264
2022-10-18 09:15:07 +00:00
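The rename is a fixed prefix change from `.m.rules.` to `.m.rule.`. A minimal sketch of the mapping (hypothetical helper; the real migration walks every user's stored ruleset):

```rust
// Maps the misspelled default push rule ids onto their spec names,
// leaving any other rule id untouched.
fn fix_rule_id(rule_id: &str) -> String {
    match rule_id.strip_prefix(".m.rules.") {
        Some(rest) => format!(".m.rule.{rest}"),
        None => rule_id.to_owned(),
    }
}
```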
Jonathan de Jong
215d909e59 More debug info when try_from_http_request fails 2022-10-17 18:41:59 +02:00
Jonathan de Jong
ada15ceacc Complement improvements 2022-10-17 18:41:45 +02:00
Charles Hall
716f82db6d
add nix/nixos deployment documentation 2022-10-16 10:50:52 -07:00
Charles Hall
fe7d8c4f12
add nix flake
Also add `.envrc` for direnv + Nix users. This makes developing locally
easier for us NixOS folks.

The flake itself will allow NixOS users to pull code directly from
Conduit's repository, making it completely trivial to stay up-to-date
with every commit.

I'd also like to add a NixOS module directly to this repository at some
point so that new configuration options will be available in the NixOS
module faster. But for now, NixOS users can simply override
`services.matrix-conduit.package` and get pretty much all the
functionality.

I've added myself to the `CODEOWNERS` file for the Nix files, since I am
willing to maintain this stuff. I use Conduit on NixOS so I'm personally
invested in having this work.

Lastly, `.gitignore` was updated to exclude symlinks created by `direnv`
and `nix build` and other such Nix commands.

This doesn't come without maintenance burden, however:

* The `sha256` in `flake.nix` will need to be updated whenever Conduit's
  MSRV is updated, but that should be pretty infrequent.

* `nix flake update` should be run every so often to pull in updates to
  `nixpkgs` and other flake inputs. I think downstream users can also
  override this themselves with `inputs.<name>.inputs.<name>.follows`.

* `nix flake check` should be run in CI to ensure Nix builds keep
  working.

* `nixpkgs-fmt --check $(fd '\.nix')` (or similar) should be run in CI
  to ensure style uniformity.
2022-10-15 19:26:53 -07:00
Timo Kösters
cb2b5beea8 Merge branch 'fix_persy' into 'next'
fix: update persy implementation after refactor

See merge request famedly/conduit!396
2022-10-15 12:13:36 +00:00
Timo Kösters
2231a69b4c
fix: make previous MR compile 2022-10-15 14:07:27 +02:00
Timo Kösters
13052388a7
Merge branch 'conduit-next' into next 2022-10-15 13:55:39 +02:00
Max Cohen
6fd39ae174
Raise 404 when room doesn't exist
Raise 404 "Room not found" when changing or accessing room visibility
settings (`GET` and `PUT
/_matrix/client/r0/directory/list/room/{roomId}`).
See issue #290
2022-10-15 13:52:58 +02:00
Timo Kösters
2627ca5e3d Merge branch 'update-rust' into 'next'
update rust to avoid a cargo problem

See merge request famedly/conduit!395
2022-10-15 11:50:55 +00:00
Timo Kösters
ed5b8d6a46 Merge branch 'Nyaaori/fix-whoami-appservices' into 'next'
Fix is_guest value on whoami for appservice users

Closes #310

See merge request famedly/conduit!401
2022-10-15 11:32:49 +00:00
Nyaaori
2d0fdddd34
Do not return true for is_guest on whoami for appservice users 2022-10-15 13:17:58 +02:00
Timo Kösters
3054af41ba Merge branch 'Nyaaori/bump-default-room-version' into 'next'
Bump default room version to V9

See merge request famedly/conduit!400
2022-10-15 11:02:44 +00:00
Nyaaori
1e1a144dfa
Move room version 10 out of experimental/unstable 2022-10-15 12:17:06 +02:00
Nyaaori
cc3e1f58cc
Bump default room version to V9; per matrix spec recommendation 2022-10-15 12:16:02 +02:00
Timo Kösters
b1991c8f4f Merge branch 'Nyaaori/rejoin-fix' into 'next'
Rejoin over federation if we are not participating

See merge request famedly/conduit!399
2022-10-15 09:56:55 +00:00
Timo Kösters
6f7f2820ce Merge branch 'Nyaaori/restricted-join-fix' into 'next'
Fix doing restricted joins into rooms we are not participating in

See merge request famedly/conduit!398
2022-10-15 09:50:25 +00:00
Nyaaori
e9697f13d6
Handle initiating restricted joins over federation
Allows Conduit users to join restricted rooms if we are not currently participating
2022-10-15 10:46:50 +02:00
Nyaaori
3b0aa23fdf
Rejoin room over federation if we are not participating in it; do not include invited users in participating servers calculation 2022-10-15 10:38:30 +02:00
Timo Kösters
aca6218c0a Merge branch 'unrecognizedmethods' into 'next'
fix: send unrecognized error on wrong http methods

See merge request famedly/conduit!397
2022-10-15 08:35:39 +00:00
Timo Kösters
3a45628e1d
fix: send unrecognized error on wrong http methods 2022-10-15 00:28:43 +02:00
AndSDev
e923f63c49 fix(service/rooms/timeline): fix validating for non-joined members 2022-10-14 14:45:05 +03:00
Tglman
842feabced fix: update persy implementation after refactor 2022-10-13 20:02:36 +01:00
Charles Hall
286936db32
msrv is 1.63 in Cargo.toml; use that 2022-10-13 08:26:46 -07:00
Charles Hall
bf7c4b4001
update rust to avoid a cargo problem
We were hitting [this bug][0] when trying to select a version for clap
^4.

[0]: https://github.com/rust-lang/cargo/issues/10623
2022-10-13 08:06:49 -07:00
AndSDev
d755a96c2c refactor(service/rooms/timeline): add cache for server_name 2022-10-13 11:19:51 +00:00
Timo Kösters
c948324cf2 Merge branch 'fix-admin-help' into 'next'
fix `@conduit help` not working in the admin room

See merge request famedly/conduit!392
2022-10-13 11:15:49 +00:00
AndSDev
76f81ac201 feat(db/rooms): disable banning for last user and conduit user in admins room 2022-10-13 14:15:23 +03:00
Timo Kösters
ce188daccb
Merge branch 'conduit-lower-default-log-level' into HEAD 2022-10-13 13:13:03 +02:00
exin
98702da4e6
Lower default log level for docker 2022-10-13 13:11:15 +02:00
exin
92f7f0c849
Lower log level commented config options 2022-10-13 13:11:15 +02:00
exin
7451abe3ea
Lower default log level for docker and debian 2022-10-13 13:11:14 +02:00
exin
3e6c66b899
Fix formatting 2022-10-13 13:11:14 +02:00
exin
3a40bf8ae0
Add error for invalid log config
Log config falls back to "warn"
2022-10-13 13:11:14 +02:00
exin
9c922db14b
Lower default log level
Update config-example.toml accordingly

Closes #281
2022-10-13 13:11:13 +02:00
Timo Kösters
175fba5739 Merge branch 'fix-login-token' into 'next'
fix(client/login): username in lowercase for login by token

See merge request famedly/conduit!380
2022-10-13 11:08:04 +00:00
AndSDev
912491cb28 style(db/rooms): refactor admin room pdu validating 2022-10-13 14:04:26 +03:00
AndSDev
da2dbd2877 feat(db/rooms): disable leaving from admin room for last user 2022-10-13 13:09:26 +03:00
AndSDev
c67f95ebff feat(db/rooms): disable leaving from admin room for conduit user 2022-10-13 13:01:18 +03:00
AndSDev
3a8321f9ad feat(db/rooms): encryption is not allowed in the admins room 2022-10-13 12:50:23 +03:00
Timo Kösters
f46d64e52f Merge branch 'unstability' into 'next'
Mark unstable versions as unstable in /capabilities

See merge request famedly/conduit!394
2022-10-13 09:39:45 +00:00
Timo Kösters
8c6e75a0cd
Mark unstable versions as unstable in /capabilities 2022-10-13 10:27:42 +02:00
Timo Kösters
c23b4946c5 Merge branch 'fixallthebugs' into 'next'
fix: all the e2ee problems

See merge request famedly/conduit!393
2022-10-13 08:21:12 +00:00
Timo Kösters
ac52b234fa
fix: all the e2ee problems 2022-10-13 10:15:35 +02:00
AndSDev
9a47069f45 fix(client/login): username in lowercase for login by token 2022-10-13 06:40:17 +00:00
Charles Hall
7ef9fe3454
add regression tests
This way we don't regress by accident again in the future.
2022-10-12 17:58:43 -07:00
Charles Hall
fc852f8be6
resolve cargo check --features clap/deprecated
This has no functional effects.
2022-10-12 17:55:12 -07:00
Charles Hall
4710f739c0
clap v4 turned more things into optional features
So we need to re-enable some things. See their changelog[0] for details.

[0]: https://github.com/clap-rs/clap/blob/master/CHANGELOG.md#migrating
2022-10-12 17:55:12 -07:00
Charles Hall
3c20c1b72e
fix cargo test 2022-10-12 17:55:12 -07:00
Timo Kösters
37eb686b5b Merge branch 'fixfluffy' into 'next'
fix: fluffychat login works again

See merge request famedly/conduit!391
2022-10-12 15:37:50 +00:00
Timo Kösters
fdd64fc966
fix: fluffychat login works again 2022-10-12 17:18:01 +02:00
Timo Kösters
4d982d05af Merge branch 'claimfast' into 'next'
improvement: more efficient /claim

See merge request famedly/conduit!389
2022-10-12 09:09:23 +00:00
Timo Kösters
1e725bc548 Merge branch 'fixmakejoin' into 'next'
fix: make join should not send event id

See merge request famedly/conduit!390
2022-10-12 09:08:58 +00:00
Timo Kösters
dd8f4681a2
fix: make join should not send event id 2022-10-12 10:57:54 +02:00
Timo Kösters
0290f1f355
improvement: more efficient /claim 2022-10-12 10:43:30 +02:00
Timo Kösters
cd835fc7a8 Merge branch 'initialSyncFix' into 'next'
Initial sync fix

See merge request famedly/conduit!388
2022-10-11 21:24:19 +00:00
Timo Kösters
2b70d9604a
fix: element gets stuck in /initialSync 2022-10-11 23:07:03 +02:00
Timo Kösters
d3968c2fd1
fix: bump ruma again to fix state res problems 2022-10-11 21:51:20 +02:00
Timo Kösters
8105c5cc60
cargo fmt 2022-10-11 18:10:51 +02:00
Timo Kösters
d1e5acd7b3
fix: don't panic on missing events in state 2022-10-11 17:59:49 +02:00
Timo Kösters
68227c06c3
fix: state for left rooms 2022-10-11 17:10:56 +02:00
Timo Kösters
31d1801912
fix: workaround for missing avatars on element and rooms becoming historical 2022-10-11 17:10:09 +02:00
Timo Kösters
fb6bfa9753
fix: missing field origin error with synapse servers 2022-10-11 15:25:10 +02:00
Timo Kösters
c30cc6120b
fix: send right errors on make/send join in restricted rooms 2022-10-11 11:53:13 +02:00
Nyaaori
2b7c19835b
Add room version 10 to experimental versions 2022-10-10 15:00:44 +02:00
Timo Kösters
c2a5315e9f
Merge branch 'm0dex/fix-signature-upload' into 'next'
fix(client/keys): ignore all but signed keys in signature upload route

See merge request famedly/conduit!378
2022-10-10 14:42:23 +02:00
Jakub Kubík
0ddc3c01ef
style(client/keys): rename signature key to signed key 2022-10-10 14:41:43 +02:00
Jakub Kubík
c15205fb46
fix(client/keys): ignore non-signature keys in signature upload route 2022-10-10 14:41:00 +02:00
Jonas Zohren
cb837d5a1c
Merge branch 'conduit-dockerfile-db-path' into 'next'
Dockerfile: changing DB path to be same as we are using in CI

See merge request famedly/conduit!371
2022-10-10 14:40:19 +02:00
majso
18ca2e4c29
Dockerfile: changing DB path to be same as we are using in CI 2022-10-10 14:39:36 +02:00
Timo Kösters
a10dae38e2
Merge branch 'v4' into 'next'
Bump version to 0.4

See merge request famedly/conduit!368
2022-10-10 14:38:56 +02:00
Timo Kösters
7cf060ae5b
Bump version to 0.4 2022-10-10 14:38:17 +02:00
Timo Kösters
de9b0cec50
Merge branch 'lightning_bolt_option' into 'next'
Lightning bolt optional

See merge request famedly/conduit!366
2022-10-10 14:35:56 +02:00
Jonas Zohren
773eded0af
Merge branch 'ci-split-cargo-test-and-clippy' into 'next'
Feat(ci): Split clippy into own fallible job

See merge request famedly/conduit!367
2022-10-10 14:35:15 +02:00
Jim
df8703cc13
Lightning bolt optional 2022-10-10 14:34:28 +02:00
Jonas Zohren
71cffcd537
feat(ci): Split clippy into own fallible job
For some reason, the clippy build does not work.
This change allows the cargo:test job to still succeed
and the pipeline to pass
2022-10-10 14:13:18 +02:00
Nyaaori
f430b87459
cargo clippy 2022-10-10 14:09:11 +02:00
Timo Kösters
ca82b2940d
fix: sending does not work
We were inserting one too many 0xff bytes
2022-10-10 14:02:05 +02:00
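Context for the fix above: Conduit's key-value layout joins key components with 0xff separator bytes, and the bug was one separator too many being appended while building a key. A minimal sketch of that convention, using a hypothetical helper rather than the actual Conduit code:

```rust
// Hypothetical helper illustrating the 0xff-separated key convention the
// commit message refers to; not the actual Conduit code.
fn make_key(segments: &[&[u8]]) -> Vec<u8> {
    let mut key = Vec::new();
    for (i, segment) in segments.iter().enumerate() {
        if i > 0 {
            key.push(0xff); // exactly one separator between adjacent segments
        }
        key.extend_from_slice(segment);
    }
    key
}

fn main() {
    let segments: [&[u8]; 2] = [b"!room:server", b"$event:server"];
    let key = make_key(&segments);
    // Two segments -> exactly one 0xff separator in the resulting key.
    assert_eq!(key.iter().filter(|&&b| b == 0xff).count(), 1);
}
```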
Timo Kösters
229444c932
Use ring-compat feature so our signing keys work again 2022-10-10 14:02:04 +02:00
Timo Kösters
076e9810ba
cargo fix 2022-10-10 14:02:04 +02:00
Timo Kösters
6b131202b9
Bump ruma 2022-10-10 14:02:04 +02:00
Timo Kösters
275c6b447d
Bump some dependencies 2022-10-10 14:02:04 +02:00
Timo Kösters
1a7893dbbd
fix: update state_cache on join over federation 2022-10-10 14:02:03 +02:00
Timo Kösters
5a04559cb4
fix: maintain server list again 2022-10-10 14:02:03 +02:00
Timo Kösters
25c3d89f28
Bump rust version for const fn RwLock::new 2022-10-10 14:02:03 +02:00
Timo Kösters
8b5b7a1f63
fix: panic on launch
Now we start the admin and sending threads at a later time.
2022-10-10 14:02:02 +02:00
Timo Kösters
50b0eb9929
cargo fix 2022-10-10 14:02:02 +02:00
Timo Kösters
7822a385bb
cargo fmt 2022-10-10 14:02:02 +02:00
Timo Kösters
d5b4754cf4
0 errors left! 2022-10-10 14:02:02 +02:00
Timo Kösters
f47a5cd5d5
cargo fix 2022-10-10 14:02:01 +02:00
Timo Kösters
a4637e2ba1
cargo fmt 2022-10-10 14:02:01 +02:00
Timo Kösters
33a2b2b772
37 errors left 2022-10-10 14:02:01 +02:00
Timo Kösters
44fe6d1554
127 errors left 2022-10-10 14:02:00 +02:00
Timo Kösters
cff52d7ebb
messing around with arcs 2022-10-10 14:02:00 +02:00
Timo Kösters
face766e0f
messing with trait objects 2022-10-10 14:02:00 +02:00
Timo Kösters
8708cd3b63
431 errors left 2022-10-10 14:02:00 +02:00
Timo Kösters
bd8b616ca0
Fixed more compile time errors 2022-10-10 13:54:00 +02:00
Nyaaori
785ddfc4aa
refactor: prepare for more splits 2022-10-10 13:52:52 +02:00
Nyaaori
232978087a
refactor: prepare database/key_value/media.rs from service/media.rs 2022-10-10 13:52:07 +02:00
Nyaaori
7946c5f29e
refactor: prepare service/account_data/mod.rs from service/account_data.rs 2022-10-10 13:52:07 +02:00
Nyaaori
efad401751
refactor: prepare service/account_data/data.rs from service/account_data.rs 2022-10-10 13:52:07 +02:00
Nyaaori
e1e87b8d0c
refactor: prepare service/admin/mod.rs from service/admin.rs 2022-10-10 13:52:07 +02:00
Nyaaori
c6d1421e81
refactor: prepare service/key_backups/mod.rs from service/key_backups.rs 2022-10-10 13:52:06 +02:00
Nyaaori
5a29511d34
refactor: prepare service/key_backups/data.rs from service/key_backups.rs 2022-10-10 13:52:06 +02:00
Nyaaori
d024d205c0
refactor: prepare service/media/mod.rs from service/media.rs 2022-10-10 13:52:06 +02:00
Nyaaori
4649cd82b5
refactor: prepare database/key_value/globals.rs from service/globals.rs 2022-10-10 13:52:05 +02:00
Timo Kösters
057f8364cc
fix: some compile time errors
Only 174 errors left!
2022-10-10 13:25:01 +02:00
Timo Kösters
82e7f57b38
refactor state accessor, state cache, user, uiaa 2022-10-10 13:21:09 +02:00
Nyaaori
3e22bbeecd
refactor: prepare for state accessor, state cache, user, and uiaa 2022-10-10 13:20:05 +02:00
Nyaaori
213579ee9d
refactor: prepare database/key_value/uiaa.rs from service/uiaa/mod.rs 2022-10-10 13:19:31 +02:00
Nyaaori
810a6baf34
refactor: prepare service/uiaa/data.rs from service/uiaa/mod.rs 2022-10-10 13:19:31 +02:00
Nyaaori
61f6ac0d66
refactor: prepare service/rooms/state_accessor/data.rs from service/rooms/state_accessor/mod.rs 2022-10-10 13:19:31 +02:00
Nyaaori
6d981f37a2
refactor: prepare database/key_value/rooms/state_accessor.rs from service/rooms/state_accessor/mod.rs 2022-10-10 13:19:30 +02:00
Nyaaori
7e0b8ec0ac
refactor: prepare database/key_value/rooms/user.rs from service/rooms/user/mod.rs 2022-10-10 13:19:30 +02:00
Nyaaori
19743ae195
refactor: prepare service/rooms/user/data.rs from service/rooms/user/mod.rs 2022-10-10 13:19:30 +02:00
Jakub Kubík
fd0ea4bf71
feat(database/presence): add skeleton for presence maintenance 2022-10-10 13:00:55 +02:00
Timo Kösters
f56424bc8d
Refactor appservices, pusher, timeline, transactionids, users 2022-10-10 13:00:53 +02:00
Nyaaori
01bf348811
refactor: prepare for appservices, pusher, timeline, transactionids, and users 2022-10-10 13:00:06 +02:00
Nyaaori
bea5d1e0d8
refactor: prepare database/key_value/rooms/timeline.rs from service/rooms/timeline/mod.rs 2022-10-10 12:56:13 +02:00
Nyaaori
e8b33e8c5a
refactor: prepare service/rooms/timeline/data.rs from service/rooms/timeline/mod.rs 2022-10-10 12:56:13 +02:00
Nyaaori
dc7670f3a8
refactor: prepare service/users/mod.rs from service/users.rs 2022-10-10 12:56:12 +02:00
Nyaaori
94ce06bb76
refactor: prepare service/users/data.rs from service/users.rs 2022-10-10 12:56:12 +02:00
Nyaaori
70863260f6
refactor: prepare service/pusher/mod.rs from service/pusher.rs 2022-10-10 12:56:12 +02:00
Nyaaori
cb9458122c
refactor: prepare service/pusher/data.rs from service/pusher.rs 2022-10-10 12:56:12 +02:00
Nyaaori
e62b0904ea
refactor: prepare database/key_value/pusher.rs from service/pusher.rs 2022-10-10 12:56:11 +02:00
Nyaaori
306ff5ee4e
refactor: prepare database/key_value/users.rs from service/users.rs 2022-10-10 12:56:11 +02:00
Timo Kösters
e045abe961
refactor: work on auth chain and state compressor 2022-10-10 11:18:53 +02:00
Nyaaori
0daa3209db
refactor: prepare for auth chain and state compressor 2022-10-10 11:17:43 +02:00
Nyaaori
8d0ed3ec51
refactor: prepare database/key_value/rooms/state_compressor.rs from service/rooms/state_compressor/mod.rs 2022-10-10 11:17:34 +02:00
Nyaaori
691e69847f
refactor: prepare database/key_value/rooms/auth_chain.rs from service/rooms/state_compressor/mod.rs 2022-10-10 11:17:34 +02:00
Nyaaori
c8f64844ab
refactor: prepare service/rooms/auth_chain/mod.rs from service/rooms/state_compressor/mod.rs 2022-10-10 11:17:34 +02:00
Timo Kösters
b0029c49b9
refactor: work on search 2022-10-10 10:46:39 +02:00
Nyaaori
91ad250177
refactor: prepare for search work 2022-10-10 10:43:52 +02:00
Nyaaori
f6040ef2d7
refactor: prepare database/key_value/rooms/search.rs from service/rooms/timeline/mod.rs 2022-10-09 18:52:58 +02:00
Nyaaori
877ee48480
refactor: prepare database/key_value/rooms/search.rs from service/rooms/search/mod.rs 2022-10-09 18:52:58 +02:00
Timo Kösters
03e6e43ecd
refactor: split up database/key_value.rs 2022-10-09 18:23:59 +02:00
Nyaaori
6ace16abf6
refactor: prepare to split up database/key_value.rs 2022-10-09 18:23:59 +02:00
Nyaaori
158de9ca08
refactor: prepare src/database/key_value/room/outlier.rs from src/database/key_value.rs 2022-10-09 18:23:58 +02:00
Nyaaori
ea2dcf4ff0
refactor: prepare src/database/key_value/room/pdu_metadata.rs from src/database/key_value.rs 2022-10-09 18:23:58 +02:00
Nyaaori
332e7c9dba
refactor: prepare src/database/key_value/room/state.rs from src/database/key_value.rs 2022-10-09 18:23:58 +02:00
Nyaaori
0213a32e6a
refactor: prepare src/database/key_value/room/edus/typing.rs from src/database/key_value.rs 2022-10-09 18:23:57 +02:00
Nyaaori
cd3a163816
refactor: prepare src/database/key_value/room/lazy_load.rs from src/database/key_value.rs 2022-10-09 18:23:57 +02:00
Nyaaori
2950349adf
refactor: prepare src/database/key_value/room/metadata.rs from src/database/key_value.rs 2022-10-09 18:23:57 +02:00
Nyaaori
56cacf6f1c
refactor: prepare src/database/key_value/room/alias.rs from src/database/key_value.rs 2022-10-09 18:23:56 +02:00
Nyaaori
0f77ae14e4
refactor: prepare src/database/key_value/room/directory.rs from src/database/key_value.rs 2022-10-09 18:23:56 +02:00
Nyaaori
8fa990330f
refactor: prepare src/database/key_value/room/edus/presence.rs from src/database/key_value.rs 2022-10-09 18:23:56 +02:00
Nyaaori
84630f90b7
refactor: prepare src/database/key_value/room/edus/read_receipt.rs from src/database/key_value.rs 2022-10-09 18:23:56 +02:00
Jakub Kubík
1869a38b85
refactor(edus): split edus into separate modules 2022-10-09 18:23:55 +02:00
Nyaaori
e39358d375
refactor: prepare to split edus into separate modules 2022-10-09 18:23:55 +02:00
Nyaaori
c7e601eb0b
refactor: prepare service/rooms/edus/typing/data.rs from service/rooms/edus/data.rs 2022-10-09 17:38:46 +02:00
Nyaaori
ac4724e82c
refactor: prepare service/rooms/edus/read_receipt/data.rs from service/rooms/edus/data.rs 2022-10-09 17:38:23 +02:00
Nyaaori
73217f238c
refactor: prepare service/rooms/edus/presence/data.rs from service/rooms/edus/data.rs 2022-10-09 17:37:57 +02:00
Nyaaori
d410f08642
refactor: prepare src/service/rooms/edus/typing/mod.rs from src/service/rooms/edus/mod.rs 2022-10-09 17:36:08 +02:00
Nyaaori
bfccd4f136
refactor: prepare src/service/rooms/edus/presence/mod.rs from src/service/rooms/edus/mod.rs 2022-10-09 17:35:14 +02:00
Nyaaori
c21820083b
refactor: prepare src/service/rooms/edus/read_receipt/mod.rs from src/service/rooms/edus/mod.rs 2022-10-09 17:34:24 +02:00
Timo Kösters
865e35df17
Work on rooms/state, database, alias, directory, edus services, event_handler, lazy_loading, metadata, outlier, and pdu_metadata 2022-08-15 19:03:37 +02:00
Nyaaori
604b1a5cf1 refactor: Prepare src/database/key_value.rs 2022-08-15 18:58:03 +02:00
Nyaaori
81ac01c2f5
refactor: restore src/service/rooms/pdu_metadata/mod.rs 2022-08-15 18:47:01 +02:00
Nyaaori
1ccc226c6b
refactor: prepare src/database/key_value.rs from src/service/rooms/pdu_metadata/mod.rs 2022-08-15 18:47:01 +02:00
Nyaaori
0ce4446b1a
refactor: restore src/service/rooms/metadata/mod.rs 2022-08-15 18:47:00 +02:00
Nyaaori
daa969508f
refactor: restore src/service/rooms/outlier/mod.rs 2022-08-15 18:47:00 +02:00
Nyaaori
715b30a2b5
refactor: prepare src/database/key_value.rs from src/service/rooms/outlier/mod.rs 2022-08-15 18:47:00 +02:00
Nyaaori
42fe118cbe
refactor: restore src/service/rooms/edus/mod.rs 2022-08-15 18:46:59 +02:00
Nyaaori
06bfddf0da
refactor: restore src/service/rooms/lazy_loading/mod.rs 2022-08-15 18:46:59 +02:00
Nyaaori
931c8ece4a
refactor: prepare src/database/key_value.rs from src/service/rooms/metadata/mod.rs 2022-08-15 18:46:59 +02:00
Nyaaori
85e571badd
refactor: prepare src/database/key_value.rs from src/service/rooms/lazy_loading/mod.rs 2022-08-15 18:46:59 +02:00
Nyaaori
0071a9cbf4
refactor: restore src/service/rooms/directory/mod.rs 2022-08-15 18:46:58 +02:00
Nyaaori
a563b1ba9a
refactor: prepare src/database/key_value.rs from src/service/rooms/edus/mod.rs 2022-08-15 18:46:58 +02:00
Nyaaori
9e1ab74bb4
refactor: prepare src/database/key_value.rs from src/service/rooms/directory/mod.rs 2022-08-15 18:46:58 +02:00
Nyaaori
adafb335ff
refactor: restore src/service/rooms/state/mod.rs 2022-08-15 18:46:57 +02:00
Nyaaori
05487c7c15
refactor: restore src/service/rooms/alias/mod.rs 2022-08-15 18:46:57 +02:00
Nyaaori
a2a327af7c
refactor: prepare src/database/key_value.rs from src/service/rooms/state/mod.rs 2022-08-15 18:46:57 +02:00
Nyaaori
33c0e0f430
refactor: prepare src/database/key_value.rs from src/service/rooms/alias/mod.rs 2022-08-15 18:46:57 +02:00
Nyaaori
1442c64420
refactor: restore src/service/rooms/state/data.rs 2022-08-15 18:46:50 +02:00
Nyaaori
28644f236e
refactor: prepare src/database/key_value.rs from src/service/rooms/state/data.rs 2022-08-15 18:46:50 +02:00
Timo Kösters
cc80152889
refactor: split up force_state 2022-08-15 17:17:53 +02:00
Timo Kösters
dcdbcc0851
refactor: event handling code 2022-08-15 17:12:22 +02:00
Nyaaori
1b0477d569
refactor: Preparation commit to split src/service/rooms/state.rs and src/api/server_server.rs 2022-08-15 17:09:41 +02:00
Nyaaori
57c92f8044
refactor: restore src/api/server_server.rs 2022-08-15 17:09:22 +02:00
Nyaaori
e1d8c03e47
refactor: prepare splitting src/api/server_server.rs to src/service/rooms/event_handler/mod.rs 2022-08-15 17:09:15 +02:00
Nyaaori
7d2b22f58d
refactor: prepare splitting src/service/rooms/state.rs to src/service/rooms/state_accessor/mod.rs 2022-08-15 17:08:33 +02:00
Nyaaori
9efd9f06c6
refactor: prepare splitting src/service/rooms/state.rs to src/service/rooms/state/data.rs 2022-08-15 17:07:33 +02:00
Nyaaori
d0cbe46ff0
refactor: prepare splitting src/service/rooms/state.rs to src/service/rooms/state/mod.rs 2022-08-15 17:07:33 +02:00
Timo Kösters
025b64befc
refactor: renames and split room.rs 2022-08-15 16:30:34 +02:00
Nyaaori
92e59f14e0
refactor: Preparation commit to split src/database/rooms.rs 2022-08-15 16:25:38 +02:00
Nyaaori
7989c7cdda
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/timeline.rs 2022-08-15 16:22:38 +02:00
Nyaaori
e22f5fef1f
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/user.rs 2022-08-15 16:22:38 +02:00
Nyaaori
64a022a4d2
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/state.rs 2022-08-15 16:22:37 +02:00
Nyaaori
751be39376
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/state_cache.rs 2022-08-15 16:22:37 +02:00
Nyaaori
d05b84d0f5
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/state_compressor.rs 2022-08-15 16:22:37 +02:00
Nyaaori
54bf91b76e
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/outlier.rs 2022-08-15 16:22:36 +02:00
Nyaaori
8ed79a00fd
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/pdu_metadata.rs 2022-08-15 16:22:36 +02:00
Nyaaori
8dffdadfd3
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/search.rs 2022-08-15 16:22:36 +02:00
Nyaaori
2dbfbd45a2
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/short.rs 2022-08-15 16:22:36 +02:00
Nyaaori
249440115b
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/lazy_loading.rs 2022-08-15 16:22:35 +02:00
Nyaaori
baa8224cce
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/metadata.rs 2022-08-15 16:22:35 +02:00
Nyaaori
bd7b49b098
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/mod.rs 2022-08-15 16:22:35 +02:00
Nyaaori
27e2f0d545
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/alias.rs 2022-08-15 16:22:34 +02:00
Nyaaori
4dc14e1580
refactor: prepare splitting src/database/rooms.rs to src/service/rooms/directory.rs 2022-08-15 16:22:34 +02:00
Timo Kösters
df16012661 Merge branch 'next' into 'master'
Release v0.4.0

See merge request famedly/conduit!369
2022-06-23 07:43:05 +00:00
Timo Kösters
e8cd85fee4 Merge branch 'v4' into 'next'
Bump version to 0.4

See merge request famedly/conduit!368
2022-06-23 07:07:44 +00:00
Timo Kösters
35fd732b04
Bump version to 0.4 2022-06-23 09:06:39 +02:00
Timo Kösters
10b1557c0e Merge branch 'lightning_bolt_option' into 'next'
Lightning bolt optional

See merge request famedly/conduit!366
2022-06-23 06:58:34 +00:00
Jim
49bd75b856 Lightning bolt optional 2022-06-23 06:58:34 +00:00
Jonas Zohren
02f8218867 Merge branch 'ci-split-cargo-test-and-clippy' into 'next'
Feat(ci): Split clippy into own fallible job

See merge request famedly/conduit!367
2022-06-22 23:16:45 +00:00
Jonas Zohren
40eebbd9d8 feat(ci): Split clippy into own fallible job
For some reason, the clippy build does not work.
This change allows the cargo:test job to still succeed
and the pipeline to pass
2022-06-22 22:14:53 +00:00
Timo Kösters
9ee199b0c3 Merge branch 'deactivate-user-command' into 'next'
Deactivate user command

See merge request famedly/conduit!337
2022-06-19 19:38:07 +00:00
Zeyphros
1c31f7905f
Update command comment to coincide with the default action 2022-06-19 18:59:49 +02:00
Zeyphros
f6183e457d
Implement command to deactivate user from admin channel
Use `leave_room` in `leave_all_rooms`

WIP: Add command to delete a list of users
also implements a flag to prevent the user from being removed from their joined rooms.

Report user deactivation failure reason

Don't send leave events by default when mass deactivating user accounts

Don't stop leaving rooms if an error was encountered

WIP: Rename command, make flags consistent, don't deactivate admin accounts.
Accounts should be deactivated as fast as possible, and removing users from their joined rooms is completed afterwards.

Fix admin safety logic, improve command output

Continue leaving rooms if a room_id is invalid

Ignore errors from leave_room

Add notice to the list-local-users command
Output from list-local-users can be used directly, without modification, with the deactivate-all command

Only get mutex lock for admin room when sending message
2022-06-19 18:59:48 +02:00
Timo Kösters
2ecbcdda42 Merge branch 'upgrades' into 'next'
Upgrade dependencies

See merge request famedly/conduit!363
2022-06-19 15:11:21 +00:00
Timo Kösters
0c8e51e1b7
Upgrade dependencies 2022-06-19 15:40:14 +02:00
Timo Kösters
86b23338dd Merge branch 'password-length-consistency' into 'next'
Length of passwords consistently use the constant

See merge request famedly/conduit!361
2022-06-19 07:35:06 +00:00
Dietrich
7bee9c1c69 Length of passwords consistently use the constant 2022-06-19 07:10:47 +02:00
Timo Kösters
6ef1e8c4f9 Merge branch 'timo' into 'next'
More async

See merge request famedly/conduit!359
2022-06-18 20:56:26 +00:00
Timo Kösters
0bc03e90a1
improvement: make more things async 2022-06-18 22:55:37 +02:00
Timo Kösters
9b898248c7
feat: more admin commands, better logging 2022-06-18 22:55:34 +02:00
Timo Kösters
566dc0a6a2 Merge branch 'next' into 'next'
Remove outdated rust version info

See merge request famedly/conduit!360
2022-06-18 15:02:31 +00:00
Jim
b4be087a46 Merge branch 'JimZAH-next-patch-83655' into 'next'
Remove rust version requirement from deploy.md

See merge request JimZAH/conduit!2
2022-06-18 14:57:30 +00:00
Jim
722e553c6e Remove rust version requirement from deploy.md 2022-06-18 14:47:32 +00:00
Timo Kösters
f8547ecba4 Merge branch 'create-user-command' into 'next'
added a command to the admin bot to create a new user, even with registration disabled

See merge request famedly/conduit!354
2022-06-18 13:08:51 +00:00
Timo Kösters
e70cff196b Merge branch 'fix-deb-postinst-config' into 'next'
Remove the address override in deb generated config

See merge request famedly/conduit!344
2022-06-18 11:21:31 +00:00
Timo Kösters
0286a804f4 Merge branch 'filter-users' into 'next'
Hide users from user directory if they are only in private rooms and they don't share a room

Closes #24

See merge request famedly/conduit!325
2022-06-18 11:17:09 +00:00
Radek Stępień
7239243163 Hide users from user directory if they are only in private rooms and they don't share a room 2022-06-18 11:17:09 +00:00
Timo Kösters
124471199c Merge branch 'registration-without-username' into 'next'
Allow registration without username

Closes #111

See merge request famedly/conduit!340
2022-06-18 11:13:37 +00:00
Radek Stępień
84ec057f6e Allow registration without username 2022-06-18 11:13:37 +00:00
Timo Kösters
8e08a72229 Merge branch 'bump-docker-image-to-alpine-3-16-0' into 'next'
chore(docker): Bump base image to alpine 3.16.0

See merge request famedly/conduit!355
2022-06-18 11:05:42 +00:00
Jonas Zohren
e03a2b8636 chore(docker): Bump base image to alpine 3.16.0 2022-06-18 11:05:42 +00:00
Timo Kösters
83d3cbfa99 Merge branch 'rmsthebest-next-patch-62586' into 'next'
Added Caddy to the web proxy examples

See merge request famedly/conduit!352
2022-06-18 11:05:22 +00:00
Timo Kösters
84cb0667f3 Merge branch 'to_device-existing-txn-id' into 'next'
feat: if txn id exists in the db, skip the event

See merge request famedly/conduit!353
2022-06-18 11:04:16 +00:00
Jakub Kubík
c3924b566b feat: if txn id exists in the db, skip the event 2022-06-18 11:04:16 +00:00
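The commit above describes a plain idempotency check: record each to-device transaction ID and skip delivery when it has already been seen. A rough sketch, with a HashSet standing in for the database table (names are illustrative, not Conduit's API):

```rust
use std::collections::HashSet;

// The HashSet stands in for the database table of already-seen transaction IDs.
fn handle_to_device_event(seen_txn_ids: &mut HashSet<String>, txn_id: &str, event: &str) {
    // `insert` returns false if the ID was already present, meaning the event
    // was delivered before and should be skipped.
    if !seen_txn_ids.insert(txn_id.to_owned()) {
        return;
    }
    println!("delivering event for txn {txn_id}: {event}");
}

fn main() {
    let mut seen = HashSet::new();
    handle_to_device_event(&mut seen, "txn-1", "m.room_key_request"); // delivered
    handle_to_device_event(&mut seen, "txn-1", "m.room_key_request"); // skipped
}
```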
Timo Kösters
0f86506288 Merge branch 'Miepee-next-patch-24570' into 'next'
Mention different database backends in DEPLOY.md

See merge request famedly/conduit!358
2022-06-18 11:03:09 +00:00
Miepee
b862283ed9 Mention different database backends in DEPLOY.md 2022-06-16 13:23:45 +00:00
Timo Kösters
ba682fa3b4 Merge branch 'kubo6472-next-patch-16934' into 'next'
Fix FluffyChat Compatibility

See merge request famedly/conduit!357
2022-06-15 20:42:40 +00:00
Jakub Doboš
8a63a2cc68 Fix FluffyChat Compatibility 2022-06-15 13:07:07 +00:00
Timo Kösters
f6dc22127a Merge branch 'next' into 'next'
Add portforwarding + opening to the docs (+recommended extension fix)

See merge request famedly/conduit!356
2022-06-13 19:28:00 +00:00
Dietrich
bd3f9e0dbe Fix spelling. 2022-06-13 20:45:12 +02:00
Dietrich
58d784aa29 Adding a hint to closed ports in the testing section 2022-06-13 20:23:08 +02:00
Dietrich
ae8e143fe9 Add a section to Ports and forwarding 2022-06-13 20:08:18 +02:00
Dietrich
d9782c508a rust-analyzer-extension moved to rust-lang
The recommended extension id could not be found as rust-analyzer now has the id `rust-lang.rust-analyzer`
2022-06-13 20:03:30 +02:00
Timo Kösters
39bc84d81c Merge branch 'fix_panic_on_long_message' into 'next'
Don't panic when signing event fails.

Closes #232

See merge request famedly/conduit!343
2022-05-28 20:47:15 +00:00
Timo Kösters
89eb54b7ff Merge branch 'unignore-feddest-doc-test' into 'next'
enable FedDest doc-test

See merge request famedly/conduit!349
2022-05-28 20:43:37 +00:00
Jonas Zohren
8bb58061fd Merge branch 'adopt-aur-patches' into 'next'
Adjust some files to the AUR patches

See merge request famedly/conduit!351
2022-05-10 07:26:20 +00:00
Jonas Zohren
8392809eb1 Adjust some files to the AUR patches 2022-05-10 07:26:19 +00:00
=
bb033fe02a added a command to the admin bot to create a new user, even with registration disabled 2022-05-01 17:49:02 +02:00
rmsthebest
23f29d1bda Added Caddy to the web proxy examples 2022-04-17 23:08:17 +00:00
Jan Christian Grünhage
efe9d5000e enable FedDest doc-test
Doc rendering is exactly as before, but it now actually tests the code
2022-04-14 16:42:11 +02:00
Zeyphros
090d0fe684
Fix typo 2022-04-13 00:08:55 +02:00
Timo Kösters
2fcb3c8b93 Merge branch '262-missing_room_keys_endpoint' into 'next'
feat: re-register missing add_backup_keys route

Closes #262

See merge request famedly/conduit!346
2022-04-11 16:05:59 +00:00
Jakub Kubík
729d66aa11
feat: register missing add_backup_keys route 2022-04-10 14:56:43 +02:00
Paul van Tilburg
b10dbc747b
Remove the address override in deb generated config
This override was accidentally introduced by commit de6c331.
The Debian postinst script will ask for and generate a config with the
address set. This should not be overriden by what is set in the default
config and is thus a deviation from the standard docs.
2022-04-09 15:13:01 +02:00
Zeyphros
07a3a6fa9a
Return an error when signing an event fails
Prevents the server from crashing or becoming unresponsive when overly long
messages are sent
2022-04-08 22:05:13 +02:00
Timo Kösters
6e106b5732 Merge branch 'v9' into 'next'
Support all room versions from V3 to V9

Closes #161

See merge request famedly/conduit!257
2022-04-07 15:27:58 +00:00
Timo Kösters
00b362b43b
fix: cors warning 2022-04-07 17:09:07 +02:00
Timo Kösters
b6b27b66c8
fix: don't allow unjoined users to send typing notifications 2022-04-07 17:07:33 +02:00
Timo Kösters
3573d40027
fix warnings 2022-04-07 17:04:29 +02:00
Timo Kösters
e4600ccfef
bump ruma 2022-04-07 17:02:49 +02:00
Timo Kösters
0ae39807a4
Add V9 to list of allowed versions 2022-04-07 16:50:09 +02:00
Timo Kösters
686319e2e3
fix: error handling 2022-04-07 16:50:07 +02:00
Nyaaori
d655f4c1be
Cleanup rooms.rs, globals.rs, and pdu.rs 2022-04-07 16:48:37 +02:00
Nyaaori
4b28146ee7
Support room version 3 2022-04-07 16:44:50 +02:00
Nyaaori
d8a3b257f2
Enable room version 4 2022-04-07 16:36:27 +02:00
Nyaaori
714873694d
Refactor room version support, add default room version config 2022-04-07 16:35:10 +02:00
Timo Kösters
d81216cad7
improvement: preparing for room version 9 2022-04-07 16:26:50 +02:00
Timo Kösters
9e29dc808f Merge branch '198-support-user-password-resets' into 'next'
feat: support user password resets

Closes #198

See merge request famedly/conduit!339
2022-04-07 12:11:55 +00:00
Jakub Kubík
ada07de204 feat: support user password resets 2022-04-07 12:11:55 +00:00
Timo Kösters
2556e29984 Merge branch 'up-ruma-went-too-far' into 'next'
Upgrade Ruma

See merge request famedly/conduit!342
2022-04-07 11:37:53 +00:00
Timo Kösters
df4c38cb61
fix: remove warnings 2022-04-07 13:22:32 +02:00
Timo Kösters
2808dd2000
Ruma upgrade 2022-04-07 12:58:48 +02:00
Timo Kösters
17ad5f0595
fix: checks for incoming cross signing changes 2022-04-07 12:56:18 +02:00
Timo Kösters
b8411ae2fd
refactor: rename endpoints to match ruma 2022-04-07 12:56:17 +02:00
Timo Kösters
566833111c
refactor: small improvements 2022-04-07 12:56:16 +02:00
chenyuqide
ee96a03d60
Update ruma 2022-04-07 12:56:16 +02:00
chenyuqide
21bc099ccf
Update ruma 2022-04-07 12:56:12 +02:00
Timo Kösters
1ce03059a0 Merge branch 'next' into 'next'
Fix wrong associated type in OutgoingKind::Appservice

See merge request famedly/conduit!324
2022-04-03 19:48:25 +00:00
Timo Kösters
9ed352d4c0 Merge branch '199-fix-kick-ban-over-federation' into 'next'
fix: fix kick and ban events over federation

Closes #199

See merge request famedly/conduit!338
2022-04-03 17:59:15 +00:00
Jakub Kubík
a08c667230
docs: add comments for clarification of recent changes 2022-04-03 19:27:48 +02:00
Jakub Kubík
414c7c40c4
fix: remove our server from the list of servers to send the event PDU to 2022-04-03 19:19:57 +02:00
Jakub Kubík
1712e63e06
fix: fix kick and ban events over federation
Fix the scenario where a MembershipState change event was not sent to the server of a user kicked/banned from a room on a Conduit instance if there were no other users from that server in the room.
2022-04-03 18:58:45 +02:00
Timo Kösters
272e27ae01 Merge branch 'appservice-pdu-send-fix' into 'next'
Send PDU to appservice if state_key is their user ID

Closes #110

See merge request famedly/conduit!331
2022-04-03 12:32:23 +00:00
Andrej Kacian
9046223e7f Send PDU to appservice if state_key is their user ID
Fixes #110.
2022-04-01 19:38:38 +02:00
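The gist of the change above: a PDU is also pushed to an appservice when the event's state_key is the appservice's own user ID, not only when the sender matches its namespaces. A simplified sketch (types, fields, and function names are assumptions):

```rust
// Simplified model of the check; not Conduit's actual types.
struct Pdu {
    sender: String,
    state_key: Option<String>,
}

fn appservice_is_interested(
    pdu: &Pdu,
    appservice_user_id: &str,
    matches_namespace: impl Fn(&str) -> bool,
) -> bool {
    // The state_key comparison is the addition the commit describes.
    matches_namespace(&pdu.sender) || pdu.state_key.as_deref() == Some(appservice_user_id)
}

fn main() {
    let pdu = Pdu {
        sender: "@alice:example.org".to_owned(),
        state_key: Some("@bridgebot:example.org".to_owned()),
    };
    assert!(appservice_is_interested(&pdu, "@bridgebot:example.org", |_| false));
}
```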
Timo Kösters
0066f20bdd Merge branch 'trailingslash' into 'next'
fix: allow trailing slashes for /state/<type>/ again

See merge request famedly/conduit!336
2022-04-01 14:17:21 +00:00
Timo Kösters
a5465dfd3e
fix: allow trailing slashes for /state/<type>/ again 2022-04-01 16:00:04 +02:00
Timo Kösters
8086bee146 Merge branch 'show-config' into 'next'
Add show-config admin command

See merge request famedly/conduit!295
2022-04-01 09:01:00 +00:00
Timo Kösters
b11a3b80bc Merge branch 'shutdown-msg' into 'next'
Log caught Ctrl+C or SIGTERM for operator feedback

See merge request famedly/conduit!319
2022-04-01 08:49:28 +00:00
Timo Kösters
554146f46e Merge branch 'notify-admin-room-on-user-register' into 'next'
Notify admin room for user registrations, deactivations and password changes

See merge request famedly/conduit!318
2022-04-01 08:41:51 +00:00
Timo Kösters
7bc84dc971 Merge branch 'jplatte/up-axum' into 'next'
Upgrade axum to 0.5

See merge request famedly/conduit!335
2022-04-01 08:33:22 +00:00
Timo Kösters
d89141100c Merge branch 'insensitive-login' into 'next'
Case insensitive username login

Closes #248

See merge request famedly/conduit!323
2022-04-01 08:20:45 +00:00
Timo Kösters
f9bf465578 Merge branch 'readable' into 'next'
Fix security issue.

See merge request famedly/conduit!316
2022-04-01 07:30:05 +00:00
Jonas Platte
3933bd9a8e
Update axum feature set used 2022-03-31 22:52:16 +02:00
Jonas Platte
db0659cb3d
Upgrade axum to 0.5 2022-03-31 22:50:17 +02:00
Timo Kösters
1219535e56 Merge branch 'fix/bad-uid-crash' into 'next'
Fix crash when a bad user ID is in the database

See merge request famedly/conduit!334
2022-03-31 19:23:41 +00:00
LordMZTE
4a12a7cbc8
Fix crash when a bad user ID is in the database
To my understanding, a bad user ID can sometimes make it into the
database, which led to a panic prior to this change.
2022-03-31 20:59:59 +02:00
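The defensive pattern described above, sketched with hypothetical names (not the actual Conduit code): parse the stored user ID and return an error instead of unwrapping, so one malformed database entry can no longer bring the server down.

```rust
// Hypothetical helper; the validation is deliberately minimal for the sketch.
fn user_id_from_db(raw: &[u8]) -> Result<String, String> {
    let s = std::str::from_utf8(raw).map_err(|_| "user ID is not valid UTF-8".to_owned())?;
    if !s.starts_with('@') || !s.contains(':') {
        return Err(format!("malformed user ID in database: {s}"));
    }
    Ok(s.to_owned())
}

fn main() {
    assert!(user_id_from_db(b"@alice:example.org").is_ok());
    assert!(user_id_from_db(b"not-a-user-id").is_err()); // handled, no panic
}
```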
Jonas Zohren
08072d2c8d Merge branch 'docker/ci-bump-alpine-version' into 'next'
chore: Bump alpine version for CI generated docker

See merge request famedly/conduit!333
2022-03-30 20:30:56 +00:00
Jonas Zohren
1ebf417c11 chore: Bump alpine version for CI generated docker 2022-03-30 20:23:04 +00:00
Jonas Zohren
a2a7c61872 Merge branch 'docker-bump-alpine-version' into 'next'
chore(docker): Bump alpine (base image) version

Closes #255

See merge request famedly/conduit!330
2022-03-18 17:52:50 +00:00
Jonas Zohren
61277452af
chore(docker): Bump alpine (base image) version 2022-03-18 18:44:05 +01:00
Timo Kösters
6be5e83e61 Merge branch 'reqwest-tls-native-roots' into 'next'
Use native root CA certificates for reqwest

See merge request famedly/conduit!329
2022-03-14 14:27:12 +00:00
Timo Kösters
c70c0129f8 Merge branch 'proxy-config-examples' into 'next'
Fix proxy config examples in config/proxy.rs

See merge request famedly/conduit!321
2022-03-14 14:26:28 +00:00
Andrej Kacian
b5b8181851 Notify admin room for user registrations, deactivations and password changes 2022-03-13 09:13:45 +01:00
Andrej Kacian
194a85d4c5 Use native root CA certificates for reqwest 2022-03-12 15:44:22 +01:00
kk
a6edf00810 Merge remote-tracking branch 'origin/next' into insensitive-login
pulling next into dev branch
2022-03-09 19:12:02 -08:00
Jonas Zohren
738f5e8f68 Merge branch 'ci-fix-cross-musl-builds' into 'next'
CI: Fix musl builds

See merge request famedly/conduit!328
2022-03-08 22:13:31 +00:00
Jonas Zohren
5a9462c9ab fix(ci): Fix musl builds
This pins the image to use for cross to a working image's sha256
2022-03-08 21:52:57 +00:00
chenyuqide
5695121f38 Fix wrong associated type in OutgoingKind::Appservice 2022-03-02 23:48:01 +08:00
reti4
8bafdc4623 fixed location of lowercase fn 2022-03-02 02:25:15 +00:00
Jonas Zohren
36a6d724fe Merge branch 'writable' into 'next'
Fix permissions

See merge request famedly/conduit!317
2022-03-01 23:55:10 +00:00
reti4
9385ea0e7c fmt fix 2022-03-01 21:23:34 +00:00
reti4
9f059ad4c3 make username login case insensitive 2022-03-01 21:03:55 +00:00
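The idea of the case-insensitive login fix, in a minimal sketch (hypothetical helper, not the actual login handler): normalize the submitted localpart to lowercase before building the user ID that gets looked up.

```rust
// Hypothetical helper; locally created localparts are lowercase, so
// lowercasing the submitted value lets "Alice" log in as "@alice:...".
fn normalize_login_user_id(localpart: &str, server_name: &str) -> String {
    format!("@{}:{}", localpart.to_lowercase(), server_name)
}

fn main() {
    assert_eq!(
        normalize_login_user_id("Alice", "example.org"),
        "@alice:example.org"
    );
}
```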
TomZ
5c6c6f272c Fix security issue.
The docs state that you need to make the config file _readable_
and then proceed to make the file writable.

This changes it so the file is owned by root and readable by
anyone. This is the default on unix/linux and the suggested practice
for files in /etc.
2022-02-23 10:15:33 +01:00
Andrej Kacian
65fa4b2ca4 Fix proxy config examples in config/proxy.rs 2022-02-22 22:32:38 +01:00
Jonas Zohren
6788225cac Merge branch 'fix-docker-db-dir-permissions' into 'next'
Fix(docker): Make conduit own default db path

See merge request famedly/conduit!320
2022-02-22 15:34:14 +00:00
Jonas Zohren
a5bb6786c8
fix(docker): Make conduit own default db path
When a user mounts a volume into the default volume path,
it uses the permissions and ownership from the host volume.
In most cases, this is 1000:1000, which it also uses on the inside.

If you don't mount a volume though (e.g., for testing), conduit cries:
“The database couldn't be loaded or created.”

This fix chowns the default db dir to remedy this.
2022-02-22 16:26:30 +01:00
Andrej Kacian
3b2b35aab7 Log caught Ctrl+C or SIGTERM for operator feedback 2022-02-22 00:28:46 +01:00
TomZ
949f2523f9 Fix permissions
The text just sets the ownership and ignores that the default on unix
is for newly created dirs to be readable by everyone.
This closes the database to unauthorized users on multi-user systems.
2022-02-21 22:35:08 +01:00
Andrej Kacian
196c83939c Add show-config admin room command 2022-02-21 22:27:19 +01:00
Jonas Zohren
237645e975 Merge branch 'docs' into 'next'
docs: make all configs match

Closes #205

See merge request famedly/conduit!301
2022-02-20 10:59:56 +00:00
Jonas Zohren
86162c2c20
Merge branch 'next' into docs 2022-02-20 11:43:50 +01:00
Jonas Zohren
199c84195a Merge branch 'improve-docker-documentation' into 'next'
Improve docker documentation some

See merge request famedly/conduit!314
2022-02-20 10:43:05 +00:00
Jonas Zohren
57ac4160b7
Merge branch 'next' into docs 2022-02-20 11:42:19 +01:00
Jonas Zohren
91c648253a Merge branch 'docs-remove-obsolete-cross-readme' into 'next'
Remove the now obsolete cross readme

See merge request famedly/conduit!315
2022-02-20 10:22:47 +00:00
Jonas Zohren
5a80507006
chore(docs): Remove the now obsolete cross readme 2022-02-20 11:12:49 +01:00
Jonathan de Jong
cc14727888 revert reflow 2022-02-20 10:55:17 +01:00
Jonathan de Jong
94573a3a61 improve docker documentation some 2022-02-19 17:06:06 +01:00
Jonas Zohren
0ba0fa5f6c Merge branch 'ci-audit-dependencies' into 'next'
CI: audit dependencies

See merge request famedly/conduit!313
2022-02-19 11:25:30 +00:00
Jonas Zohren
ad6eb92bbd
feat(ci): Add dependency audit to CI tests 2022-02-19 12:19:06 +01:00
Timo Kösters
ce76041c03 Merge branch 'up-ruma' into 'next'
Update Ruma

See merge request famedly/conduit!312
2022-02-19 09:46:56 +00:00
Jonas Zohren
8f063c99d5
chore(ci): Split up tests 2022-02-18 22:29:55 +01:00
Jonathan de Jong
557d119bee change search_events_v3 to search_events::v3 2022-02-18 19:54:26 +01:00
Jonathan de Jong
e9f87e1952 update ruma 2022-02-18 15:33:14 +01:00
Jonas Zohren
f3795846b5 Merge branch 'readme' into 'next'
Slight clarification

See merge request famedly/conduit!310
2022-02-18 12:19:51 +00:00
Timo Kösters
b8eaa3be85 Merge branch 'redactfix' into 'next'
Redaction fix

Closes #235

See merge request famedly/conduit!298
2022-02-18 12:00:40 +00:00
Timo Kösters
c496e599ef Merge branch 'serde-cleanup' into 'next'
Remove useless serde roundtrips

See merge request famedly/conduit!311
2022-02-18 11:07:59 +00:00
Jonas Platte
27692a2f14
Remove useless serde roundtrips 2022-02-18 11:52:00 +01:00
TomZ
e57cd437d4 Slight clarification
Which version it started being beta in is quite irrelevant here.
2022-02-17 23:00:39 +01:00
Timo Kösters
5a99460a4c Merge branch 'not-found' into 'next'
Add a not-found route

See merge request famedly/conduit!306
2022-02-17 15:43:08 +00:00
Jonas Zohren
ba83d0ac68 Merge branch 'more-vscode-defaults' into 'next'
Provide some sane defaults for vscode developing

See merge request famedly/conduit!309
2022-02-17 15:33:03 +00:00
Jonas Zohren
bcd6c0bf53 feat: Provide sane defaults for vscode developing
This includes some extensions and a debug profile
2022-02-17 11:14:50 +00:00
Jonas Zohren
b4225cb0fc
fix(docker): use user 1000 and standard db path 2022-02-16 15:13:04 +01:00
Jonas Zohren
98b67da649
fix: Docker syntax 2022-02-16 15:13:03 +01:00
Jonas Zohren
0be8500c4f
Set all env vars in docker README 2022-02-16 15:12:40 +01:00
Jonas Zohren
97507d2880
Remove most env vars from Dockerfile 2022-02-16 15:12:40 +01:00
Jonas Zohren
c4353405a5
Suggestions from Jonas Zohren 2022-02-16 15:12:38 +01:00
Timo Kösters
de6c3312ce
docs: make all configs match 2022-02-16 15:11:46 +01:00
Jonas Zohren
c66866d890 Merge branch 'ci-lint-dockerfiles-with-hadolint' into 'next'
CI: Lint dockerfiles with hadolint

Closes #239

See merge request famedly/conduit!308
2022-02-15 19:10:12 +00:00
Jonas Zohren
b21a44ca4c
feat(ci): Lint dockerfiles with hadolint 2022-02-15 20:01:38 +01:00
Jonas Zohren
e04d4ff150 Merge branch 'ci-fix-tag-pipelines' into 'next'
Ci fix tag pipelines

Closes #229

See merge request famedly/conduit!307
2022-02-15 10:56:25 +00:00
Jonas Zohren
2645494582
fix(ci): Also run CI for git tags 2022-02-15 11:17:46 +01:00
Jonas Zohren
77f4b68c8e
fix(ci): Also create versioned docker image 2022-02-15 11:17:32 +01:00
Timo Kösters
6602f6114c
fix: redacts can't error anymore 2022-02-13 15:47:58 +01:00
Jonas Platte
3aece38e9d
Add a not-found route 2022-02-13 13:59:27 +01:00
Timo Kösters
9cfef51af3 Merge branch 'more-paths' into 'next'
Take advantage of multiple paths

See merge request famedly/conduit!305
2022-02-13 12:13:22 +00:00
Jonas Platte
aee6bf7e7a Change this to handler 2022-02-13 11:30:04 +00:00
Jonathan de Jong
b8d92d3cec take advantage of multiple paths 2022-02-13 12:07:00 +01:00
Timo Kösters
0c4b42ac13 Merge branch 'parse-pdu-command-panic' into 'next'
fix: do not panic on a JSON not containing the PDU

Closes #236

See merge request famedly/conduit!304
2022-02-12 21:22:37 +00:00
Timo Kösters
91d5fbd56c Merge branch 'up-ruma' into 'next'
Update ruma

See merge request famedly/conduit!303
2022-02-12 20:59:07 +00:00
M0dEx
d4217007fe
fix: do not panic on a JSON not containing the PDU
Do not panic on a JSON not containing the PDU when executing the parse-pdu admin command.
2022-02-12 21:40:07 +01:00
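The shape of the fix above: validate the admin command input and reply with a message instead of unwrapping a missing PDU payload. A loose sketch only; the real parse-pdu command parses the JSON properly rather than scanning for braces as done here.

```rust
// Loose illustration of "report an error instead of panicking"; not the
// actual admin command implementation.
fn parse_pdu_command(body: &str) -> String {
    let start = match body.find('{') {
        Some(i) => i,
        None => return "Expected JSON object containing the PDU in the command body.".to_owned(),
    };
    let candidate = &body[start..];
    if !candidate.trim_end().ends_with('}') {
        return "PDU JSON appears to be truncated.".to_owned();
    }
    format!("Parsing PDU from: {candidate}")
}

fn main() {
    // No panic on input without a PDU:
    println!("{}", parse_pdu_command("parse-pdu"));
}
```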
Jonathan de Jong
35b82d51cf fix compilations 2022-02-12 21:04:38 +01:00
Jonathan de Jong
0ed1e42aed update ruma 2022-02-12 21:01:53 +01:00
Timo Kösters
2b644ef7b7 Merge branch 'tracing-cleanup' into 'next'
Remove unnecessary tracing::instrument attributes

See merge request famedly/conduit!302
2022-02-12 15:50:24 +00:00
Jonas Platte
0ad6eac4f8
Remove all tracing::instrument attributes from database::abstraction::* 2022-02-12 16:38:47 +01:00
Jonas Platte
accdb77315
Clean up tracing::instrument attributes
Remove it from request handler since there's already the context of the
request path, added through TraceLayer.
2022-02-12 16:38:47 +01:00
Timo Kösters
914152fcbd Merge branch 'syncfast' into 'next'
improvement: faster /syncs

Closes #231

See merge request famedly/conduit!297
2022-02-12 15:11:03 +00:00
Timo Kösters
2a00c547a1
improvement: faster /syncs 2022-02-12 15:57:54 +01:00
Jonas Platte
adeb8ee425
Remove no-op conversions 2022-02-12 15:03:07 +01:00
Jonas Platte
d74074ad53
Remove tracing::instrument attribute from util functions
They don't ever log anything, so the extra context is never used.
2022-02-12 15:01:28 +01:00
Timo Kösters
41d3da245e Merge branch 'update_turn_readme' into 'next'
Update turn readme

See merge request famedly/conduit!292
2022-02-12 13:04:07 +00:00
Timo Kösters
0565b5a6c8 Merge branch 'show-dns-setup-error' into 'next'
Display actual error message from TokioAsyncResolver, if any

See merge request famedly/conduit!296
2022-02-12 13:01:41 +00:00
Timo Kösters
f3502beb94 Merge branch 'welcome-message-command-hint' into 'next'
feat: add welcome message command hint

See merge request famedly/conduit!299
2022-02-12 12:28:53 +00:00
Timo Kösters
d6b9874b35 Merge branch 'fix-admin-self-commands' into 'next'
Fix admin room processing commands from its own messages

See merge request famedly/conduit!293
2022-02-12 12:27:57 +00:00
Timo Kösters
1d01e2a077 Merge branch 'axum' into 'next'
Port from rocket to axum

See merge request famedly/conduit!263
2022-02-12 12:22:41 +00:00
Jonas Platte
ce714cfd07
Bump version 2022-02-12 13:20:55 +01:00
Jonas Platte
50b24b37c2
Upgrade Ruma 2022-02-12 12:56:18 +01:00
Jonas Platte
9db0473ed5
Improve error messages in Ruma wrapper FromRequest impl 2022-02-12 12:56:08 +01:00
Jonas Platte
5d8c80b170
Strip quotes from X-Matrix fields 2022-02-12 12:56:08 +01:00
Jonas Platte
21ae63d46b
Rewrite query parameter parsing 2022-02-12 12:56:08 +01:00
Jonas Platte
c8951a1d9c
Use axum-server for direct TLS support 2022-02-12 12:56:08 +01:00
Jonas Platte
5fa9190117
Simplify return type of most route handlers 2022-02-12 12:56:08 +01:00
Jonas Platte
77a87881c9
Add message to unsupported HTTP method panic 2022-02-12 12:56:08 +01:00
Jonas Platte
7bf538f549
Fix axum route conflicts 2022-02-12 12:56:07 +01:00
Jonas Platte
a5757ab195
Generalize RumaHandler 2022-02-12 12:56:07 +01:00
Jonas Platte
d1d2217019
Clean up error handling for server_server::get_server_keys_route 2022-02-12 12:56:07 +01:00
Jonas Platte
1f7b3fa4ac
Port from Rocket to axum 2022-02-12 12:56:07 +01:00
Timo Kösters
8709c3ae7b Merge branch 'refactor2' into 'next'
Remove unnecessary uses of event enums

See merge request famedly/conduit!300
2022-02-12 11:50:23 +00:00
Jonas Platte
5db4c001d1
Remove another unnecessary use of an event enum 2022-02-12 01:58:47 +01:00
Jonas Platte
583ec51f9f
Remove unnecessary use of event enum 2022-02-12 01:58:47 +01:00
M0dEx
f602d32aaa
feat: add the actual server name to the welcome message 2022-02-11 18:51:28 +01:00
M0dEx
a6976e6d2d
feat: add 'available' to the help command line in the welcome message 2022-02-11 18:40:51 +01:00
M0dEx
f2b8aa28f3
feat: add a line with the help command to the welcome message 2022-02-11 18:26:56 +01:00
Andrej Kacian
bfbefb0cd2 Display actual error message from TokioAsyncResolver, if any 2022-02-07 12:56:44 +01:00
Andrei Vasiliu
31918bb990 Fix admin room processing commands from its own messages 2022-02-05 08:57:15 +02:00
Torsten Flammiger
f110b5710a Move appservice howto into whats-next; again, rename placeholder TURN url 2022-02-04 21:11:50 +01:00
Torsten Flammiger
1cc0b55650 Resolve merge conflict 2022-02-04 19:28:57 +01:00
Torsten Flammiger
63a2c6cce5 Add new TURN Readme and reference it from DEPLOY.md 2022-02-04 19:11:29 +01:00
Jonas Zohren
faa0cdb595 Merge branch 'next' into 'master'
CI-Hotfix to master

See merge request famedly/conduit!290
2022-02-04 18:00:01 +00:00
Jonas Zohren
0cec421930 Merge branch 'ci-hotfix-sytest-on-master' into 'next'
fix(ci): Always build debug version for sytest

See merge request famedly/conduit!289
2022-02-04 17:51:57 +00:00
Jonas Zohren
826b077e21
fix(ci): Always build debug version for sytest 2022-02-04 18:43:13 +01:00
Timo Kösters
9c8c784fe7 Merge branch 'next' into 'master'
Release 0.3

See merge request famedly/conduit!288
2022-02-04 17:37:26 +00:00
Jonas Zohren
4dcc080ad9 Merge branch 'pre-release-doc-changes' into 'next'
Pre-0.3 doc adjustments

See merge request famedly/conduit!287
2022-02-04 17:12:33 +00:00
Timo Kösters
d55992dc83 Merge branch 'jemallocfeature' into 'next'
feat: allow disabling jemalloc via feature

See merge request famedly/conduit!285
2022-02-04 17:08:03 +00:00
Jonas Zohren
103dc7e09b
Pre-0.3 doc adjustments 2022-02-04 18:05:24 +01:00
Timo Kösters
dffa5570e7 Merge branch 'emptysearchcrash' into 'next'
fix: crash on empty search

Closes #190

See merge request famedly/conduit!286
2022-02-04 16:42:56 +00:00
Timo Kösters
dd03608f17
use our own reqwest fork 2022-02-04 17:24:45 +01:00
Timo Kösters
eb0b2c429f
fix: crash on empty search 2022-02-04 17:15:52 +01:00
Timo Kösters
8d8edddb2e
feat: allow disabling jemalloc via feature 2022-02-04 17:00:46 +01:00
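Making jemalloc optional typically comes down to gating the global-allocator override behind a cargo feature. A minimal sketch, assuming the tikv-jemallocator crate and a feature named jemalloc (both are assumptions, not necessarily what Conduit uses):

```rust
// Crate and feature names are assumptions for this sketch.
#[cfg(feature = "jemalloc")]
use tikv_jemallocator::Jemalloc;

// With the feature disabled, the default system allocator stays in place.
#[cfg(feature = "jemalloc")]
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    println!("allocator chosen at compile time via the `jemalloc` feature");
}
```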
Timo Kösters
f35ad27627 Merge branch 'contextfix' into 'next'
fix: lazy loading for /context

See merge request famedly/conduit!284
2022-02-04 13:45:57 +00:00
Timo Kösters
72cd52e57c
fix: lazy loading for /context 2022-02-04 13:33:04 +01:00
Timo Kösters
8db7d2c025 Merge branch 'asonix/encourage-reqwest-reuse' into 'next'
Re-use a basic reqwest client in all possible cases

See merge request famedly/conduit!265
2022-02-04 11:27:41 +00:00
Timo Kösters
51cca1a60f Merge branch 'create-admin-room' into 'next'
Create admin room and hide migration messages on first run

Closes #157 and #225

See merge request famedly/conduit!282
2022-02-03 21:09:21 +00:00
Andrei Vasiliu
e1c0dcb6bb Create admin room and hide migration messages on first run 2022-02-03 22:50:11 +02:00
Timo Kösters
86a9ec9f44 Merge branch 'up-ruma' into 'next'
Upgrade Ruma

See merge request famedly/conduit!281
2022-02-03 19:41:47 +00:00
Jonas Platte
d23d6fbb37
Upgrade Ruma 2022-02-03 20:24:02 +01:00
Timo Kösters
2d9c5791a6 Merge branch 'rocket-config' into 'next'
Remove mutation from default_config and set default log_level to off

See merge request famedly/conduit!280
2022-02-03 19:20:49 +00:00
Jonas Platte
92571d961f
Remove mutation from default_config and set default log_level to off 2022-02-03 19:55:54 +01:00
Timo Kösters
7a388f4d72 Merge branch 'command-refactor' into 'next'
Move and refactor admin commands into admin module

See merge request famedly/conduit!253
2022-02-03 18:43:54 +00:00
Andrei Vasiliu
b56efcdc2a Merge remote-tracking branch 'origin/next' into command-refactor
Fixed a small conflict in admin.rs
2022-02-03 20:31:06 +02:00
Andrei Vasiliu
6399a7fe4e Remove dash from admin command help 2022-02-03 20:21:04 +02:00
Timo Kösters
79345dc2a6 Merge branch 'refactor' into 'next'
Some refactorings

See merge request famedly/conduit!279
2022-02-03 12:50:55 +00:00
Jonas Platte
974c10e739
Move Config out of database module 2022-02-03 13:30:04 +01:00
Jonas Platte
ce60fc6859
Stop using set_env to configure tracing-subscriber 2022-02-03 13:24:28 +01:00
Jonas Platte
abb4b4cf0b
Remove TryFrom, TryInto imports
They are no longer needed in the 2021 edition.
2022-02-03 13:24:04 +01:00
Andrei Vasiliu
4bbff69a24 Merge remote-tracking branch 'origin/next' into command-refactor
Fixed conflict with commit 78502aa6b1
2022-02-03 13:12:55 +02:00
Timo Kösters
b4755ba15b Merge branch 'tests' into 'next'
Bug fixes

See merge request famedly/conduit!278
2022-02-03 10:12:04 +00:00
Timo Kösters
9ef3abacd4
fix: initial state deserialize->serialize error 2022-02-03 10:57:54 +01:00
Andrei Vasiliu
87225e70c3 Parse admin command body templates from doc comments 2022-02-02 21:35:57 +02:00
Jonas Zohren
510a44699d Merge branch 'fix-healthcheck-no-port' into 'next'
Docker: Healthcheck and build fixes

Closes #222

See merge request famedly/conduit!277
2022-02-02 14:07:49 +00:00
Jonas Zohren
c4733676cf Apply feedback from Ticho 2022-02-02 13:35:15 +00:00
Jonas Zohren
e5bac5e4f5
fix: Running in Docker 2022-02-02 14:07:35 +01:00
Jonas Zohren
9478c75f9d
Use prebuilt CI-containers from https://gitlab.com/jfowl/conduit-containers
Also run all builds on approved MRs
2022-02-02 13:31:28 +01:00
Torsten Flammiger
e24d75cffc
Return the ID of the appservice that was created by register_appservice 2022-02-02 13:29:10 +01:00
Torsten Flammiger
8f69f02e59
add error handling for register_appservice too 2022-02-02 13:29:09 +01:00
Torsten Flammiger
da7b55b39c
Cleanup appservice events after removing the appservice 2022-02-02 13:29:09 +01:00
user
bfcf2db497
fix: mention dependencies to build from source 2022-02-02 13:29:09 +01:00
Timo Kösters
a5f004d7e9
fix: signature mismatch on odd send_join servers 2022-02-02 13:25:31 +01:00
Jonas Zohren
004dfdeaac Merge branch 'ci-test-using-prebuilt-images' into 'next'
Use custom container images for CI

Closes #221

See merge request famedly/conduit!273
2022-02-01 23:51:38 +00:00
Jonas Zohren
fa4099b138 Use prebuilt CI-containers from https://gitlab.com/jfowl/conduit-containers
Also run all builds on approved MRs
2022-02-01 23:51:38 +00:00
Timo Kösters
caf9834e50
feat: cache capacity modifier 2022-02-01 14:42:13 +01:00
Timo Kösters
23aecb78c7
fix: use to_lowercase on /register/available username 2022-01-31 15:40:31 +01:00
Timo Kösters
e17bbdd42d
tests 2022-01-31 14:49:00 +01:00
Timo Kösters
95ca43e20f Merge branch 'cleanup_events_after_unregister_appservice' into 'next'
Cleanup events after unregister appservice / update appservice error handling / return ID of new appservice

See merge request famedly/conduit!276
2022-01-31 13:24:01 +00:00
Torsten Flammiger
28d3b348d2 Return the ID of the appservice that was created by register_appservice 2022-01-31 11:52:33 +01:00
Torsten Flammiger
78502aa6b1 add error handling for register_appservice too 2022-01-31 10:07:49 +01:00
Torsten Flammiger
cc13112592 Cleanup appservice events after removing the appservice 2022-01-31 09:27:31 +01:00
Andrei Vasiliu
677f044d13 Refactor admin code to always defer command processing 2022-01-31 00:00:05 +02:00
Timo Kösters
fb2a7ebf66 Merge branch 'build-dependencies-deploy' into 'next'
Mention dependencies to build from source

See merge request famedly/conduit!275
2022-01-29 08:35:54 +00:00
user
8ff95a5a48 fix: mention dependencies to build from source 2022-01-28 22:26:56 -08:00
Jonas Zohren
401b88d16d
fix: Healthcheck: use netstat for port as fallback 2022-01-28 23:23:58 +01:00
Jonas Zohren
8c1cf733b5 Merge branch 'fix-healthcheck-no-port' into 'next'
fix: Use default port for healthcheck as fallback

Closes #222

See merge request famedly/conduit!274
2022-01-28 21:48:25 +00:00
Jonas Zohren
44f7a85077
fix: Use default port for healthcheck as fallback
Conduit can start without a specific port being configured.
This adjusts the healthcheck script to tolerate that state.

Closes https://gitlab.com/famedly/conduit/-/issues/222
2022-01-28 22:33:49 +01:00
Aode (lion)
b39ddf7be9 Rename reqwest clients, mention cheap client clones in comment 2022-01-28 12:42:47 -06:00
Jonas Platte
296c68c0cd Merge branch 'get-thumbnail-mxc-as-ref' into 'next'
Do not copy mxc string unnecessarily in db.get_thumbnail()

See merge request famedly/conduit!272
2022-01-27 17:14:42 +00:00
Andrej Kacian
529e88c7f9 Do not copy mxc string unnecessarily in db.get_thumbnail() 2022-01-27 17:47:09 +01:00
Aode (lion)
1059f35fdc use pre-constructed client for well-known requests also 2022-01-27 10:37:04 -06:00
Aode (Lion)
f8d1c1a8af Re-use a basic reqwest client in all possible cases 2022-01-27 10:37:04 -06:00
Timo Kösters
20006c91af Merge branch 'media-download-followup' into 'next'
Media download followup

See merge request famedly/conduit!271
2022-01-27 16:24:15 +00:00
Andrej Kacian
0f6d232cb1 Style fixes from 'cargo fmt' 2022-01-27 17:13:33 +01:00
Andrej Kacian
ccfc243c2c Make get_remote_content() return Result instead of ConduitResult 2022-01-27 17:13:07 +01:00
Timo Kösters
f7148def90 Merge branch 'up-ruma' into 'next'
Upgrade Ruma

See merge request famedly/conduit!268
2022-01-27 15:46:00 +00:00
Timo Kösters
63309e52f8 Merge branch 'media-download-with-filename' into 'next'
Media download with filename

See merge request famedly/conduit!266
2022-01-27 15:44:56 +00:00
Andrej Kacian
c4317a7a96 Reduce code duplication in media download route handlers 2022-01-27 16:32:19 +01:00
Jonas Platte
9c2000cb89
Upgrade Ruma 2022-01-27 16:25:42 +01:00
Andrej Kacian
52873c88b7 Fix incorrect HTTP method in doc comments of two media routes 2022-01-27 00:31:44 +01:00
Andrej Kacian
8472eff277 Implement media download with custom filename 2022-01-27 00:31:44 +01:00
Jonas Zohren
ba8d5abb67 Merge branch 'fix/sccache' into 'next'
Fix CI cargo-cross caching with sccache

See merge request famedly/conduit!264
2022-01-26 18:54:13 +00:00
Maxim De Clercq
ff16729976
fix: correct RUSTC_WRAPPER path in cross container 2022-01-26 00:02:03 +01:00
Maxim De Clercq
acf1585fc3
fix: make sure that libatomic is linked statically 2022-01-24 11:45:07 +01:00
Maxim De Clercq
067fcfc0e4
fix: remove trailing slash from shared path 2022-01-23 22:51:02 +01:00
Maxim De Clercq
77ad4cb8f8
fix: use readelf for checking static compilation 2022-01-23 22:51:02 +01:00
Maxim De Clercq
64c25ea4a1
fix: always print ELF information 2022-01-23 18:38:33 +01:00
Maxim De Clercq
c7560b3502
fix: remove libgcc dependency in ci builds since the binary is ensured to be statically compiled 2022-01-23 18:09:14 +01:00
Maxim De Clercq
c2ad2b3dd7
fix: pass sccache variables to cross container with build.env.passthrough 2022-01-23 17:43:28 +01:00
Maxim De Clercq
219dfbabd5
fix: pass RUSTC_WRAPPER to the cross container and enforce static builds 2022-01-23 17:31:12 +01:00
Jonas Zohren
4a34d757d7 Merge branch 'fix/rocksdb-cross-compiling' into 'next'
Fix cross-compiling for RocksDB

Closes #213

See merge request famedly/conduit!261
2022-01-23 15:58:27 +00:00
Maxim De Clercq
fd67cd7450
feat: support targeting i686 2022-01-23 15:58:19 +01:00
Maxim De Clercq
cd9902637d
feat: use rustembedded/cross images and use static relocation model to fix cross-compile 2022-01-23 14:41:39 +01:00
Andrei Vasiliu
7505548b94 Merge remote-tracking branch 'refs/remotes/origin/next' into command-refactor
Resolved conflict for the new list_local_users command
2022-01-22 14:29:50 +02:00
Timo Kösters
f50bdb6010 Merge branch 'list_local_users' into 'next'
Implement list_local_users command

See merge request famedly/conduit!260
2022-01-22 09:33:32 +00:00
Maxim De Clercq
a021680591
fix: make sure libatomic is always linked because it's skipped on arm targets 2022-01-22 01:14:36 +01:00
Maxim De Clercq
3e9abfedb4
fix: make sure libstdc++ is linked statically when cross-compiling 2022-01-22 00:14:19 +01:00
Timo Kösters
b634f9d45c Merge branch 'reqwestfix' into 'next'
improvement: use jemalloc for lower memory usage

See merge request famedly/conduit!262
2022-01-21 16:54:35 +00:00
Timo Kösters
f88523988e
improvement: use jemalloc for lower memory usage 2022-01-21 17:54:05 +01:00
Maxim De Clercq
bfef94f5f4
fix: linking against libatomic is no longer required since the library path is fixed 2022-01-21 17:26:25 +01:00
Maxim De Clercq
d94f3c1e9a
fix: make sure cc-rs and bindgen use the correct paths when cross-compiling 2022-01-21 17:06:15 +01:00
Timo Kösters
4ef995cf7d Merge branch 'next' into 'next'
Add heisenbridge to tested appservices

See merge request famedly/conduit!250
2022-01-21 15:43:40 +00:00
Reiner Herrmann
97d56af5bd Add heisenbridge to tested appservices 2022-01-21 16:40:03 +01:00
Andrei Vasiliu
57979da28c Change structopt to clap, remove markdown dependency 2022-01-21 17:35:26 +02:00
Timo Kösters
58da67e59e Merge branch 'mautrix-signal-support' into 'next'
Add mautrix-signal to tested appservices

See merge request famedly/conduit!251
2022-01-21 15:33:10 +00:00
Timo Kösters
5d3ba5c628 Merge branch 'WIP_persy_batch_next' into 'next'
feat: Integration with persy using background ops

See merge request famedly/conduit!231
2022-01-21 15:31:46 +00:00
Torsten Flammiger
960ba8bd99 Merged current next 2022-01-21 14:32:59 +01:00
Torsten Flammiger
ba6d72f3f9 Reformatted 2022-01-21 14:28:07 +01:00
Andrei Vasiliu
cc3ef1a8be Improve help text for admin commands 2022-01-21 11:13:24 +02:00
Andrei Vasiliu
f244c0e2ce Merge remote-tracking branch 'refs/remotes/origin/next' into command-refactor 2022-01-21 10:19:17 +02:00
Andrei Vasiliu
e378bc4a2c Refactor admin commands to use structopt 2022-01-21 10:17:50 +02:00
Timo Kösters
ab4f3bd06c Merge branch 'lib-main' into 'next'
Clean up mod and use statements in lib.rs and main.rs

See merge request famedly/conduit!258
2022-01-20 12:32:39 +00:00
Jonas Platte
8d81c1c072
Use MSRV for build CI jobs
The test job will use the latest stable so all stable lints are included.
2022-01-20 13:23:58 +01:00
Jonas Platte
6bb1081b71
Use BTreeMap::into_values
Stable under new MSRV.
2022-01-20 13:19:51 +01:00
Jonas Platte
ff5fec9e74
Raise minimum supported Rust version to 1.56 2022-01-20 13:19:51 +01:00
Jonas Platte
5afb27a5a9
Use latest stable for Docker image 2022-01-20 12:29:24 +01:00
Jonas Platte
6e322716ca
Delete rust-toolchain file 2022-01-20 12:29:10 +01:00
Jonas Platte
756a41f22d
Fix rustc / clippy warnings 2022-01-20 00:10:39 +01:00
Jonas Platte
a0fc5eba72
Remove unnecessary Result 2022-01-19 23:57:22 +01:00
Timo Kösters
cc0f094ff7 Merge branch 'rocksdbbreaks' into 'next'
Rocksdb breaking change. If your server breaks, come to #conduit:fachschaften.org

See merge request famedly/conduit!259
2022-01-19 06:17:57 +00:00
Timo Kösters
d4eb3e3295
fix: rocksdb does not use zstd compression unless we disable everything else 2022-01-19 07:09:25 +01:00
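The commit message suggests zstd only kicks in once competing compression options are ruled out. One way to pin zstd explicitly with the rust-rocksdb bindings is via Options; whether that matches the actual change is not clear from the message alone, and the crate's compression support also depends on its cargo features. A sketch with illustrative option values, not Conduit's real tuning:

```rust
use rocksdb::{DBCompressionType, Options, DB};

// Illustrative options only; Conduit's real tuning differs.
fn open_db(path: &str) -> Result<DB, rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    // Force zstd both as the general compression type and for every level.
    opts.set_compression_type(DBCompressionType::Zstd);
    opts.set_compression_per_level(&[DBCompressionType::Zstd; 7]);
    DB::open(&opts, path)
}

fn main() {
    let _db = open_db("/tmp/conduit-example-db").expect("db should open");
}
```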
Jonas Platte
c6277c72a1
Fix warnings in database::abstraction 2022-01-18 21:05:40 +01:00
Jonas Platte
13a48c4577
Clean up mod and use statements in lib.rs and main.rs 2022-01-18 21:04:44 +01:00
Timo Kösters
b2ffc4e496 Merge branch 'maxopenfiles' into 'next'
Server ACL support and more config options

Closes #67

See merge request famedly/conduit!248
2022-01-18 09:05:57 +00:00
Timo Kösters
53de350908
fix: less load when lazy loading 2022-01-17 23:24:27 +01:00
Torsten Flammiger
fd6427a83f Update/Revert code comment 2022-01-17 22:34:34 +01:00
Torsten Flammiger
fc39b3447c Little bit of refactoring 2022-01-17 19:43:45 +01:00
Torsten Flammiger
4aefc29650 Merge branch 'list_local_users_test' into list_local_users 2022-01-17 19:20:11 +01:00
Timo Kösters
03b174335c
improvement: lower default pdu cache capacity 2022-01-17 14:46:53 +01:00
Timo Kösters
8c90e7adfb
refactor: fix warnings 2022-01-17 14:46:03 +01:00
Timo Kösters
ee8e72f7a8
feat: implement server ACLs 2022-01-17 14:35:38 +01:00
Jonas Zohren
24aa034e48 Merge branch 'ci-fix-cargo-test-missing-libclang' into 'next'
CI: Fix cargo-test

See merge request famedly/conduit!255
2022-01-16 20:57:23 +00:00
Jonas Zohren
10f1da12bf CI: Fix cargo-test 2022-01-16 20:57:23 +00:00
Torsten Flammiger
50430cf4ab Name function after command: list_local_users 2022-01-16 21:22:57 +01:00
Torsten Flammiger
52284ef9e2 Add some debug/info if user was found 2022-01-16 20:25:16 +01:00
Torsten Flammiger
3e79d15495 Updated function documentation 2022-01-16 20:15:53 +01:00
Andrei Vasiliu
13ae036ca0 Move and refactor admin commands into admin module 2022-01-16 13:52:23 +02:00
Torsten Flammiger
9205c07048 Update get_local_users description 2022-01-15 22:37:39 +01:00
Torsten Flammiger
c03bf6ef11 name the function after its purpose: iter_locals -> get_local_users 2022-01-15 22:20:51 +01:00
Julius de Bruijn
217e378992 Add mautrix-signal to tested appservices 2022-01-15 17:34:13 +00:00
Torsten Flammiger
91eb6c4d08 Return a Result instead of a vector 2022-01-15 17:10:23 +01:00
Torsten Flammiger
fb19114bd9 rename iter_locals to get_local_users; make get_local_users skip on parse errors; remove deprecated function count_local_users 2022-01-15 15:52:47 +01:00
Tglman
c1cd4b5e26 chore: set the released version of persy in Cargo.toml 2022-01-15 14:17:15 +00:00
Tglman
f9977ca64f fix: changes to update to the last database engine trait definition 2022-01-15 14:17:15 +00:00
Tglman
1cc41937bd refactor:use generic watcher in persy implementation 2022-01-15 14:17:15 +00:00
Tglman
ab15ec6c32 feat: Integration with persy using background ops 2022-01-15 14:17:15 +00:00
Timo Kösters
d434dfb3a5
feat: config option for rocksdb max open files 2022-01-14 11:44:20 +01:00
Timo Kösters
5b8d2a736e Merge branch 'default' into 'next'
improvement: better default cache capacity

See merge request famedly/conduit!247
2022-01-14 10:44:06 +00:00
Timo Kösters
80e51986c4
improvement: better default cache capacity 2022-01-14 11:08:31 +01:00
Jonas Zohren
8fc51f0029 Merge branch 'ci-cargo-home-workaround' into 'next'
Fix(ci): Disable CARGO_HOME caching

See merge request famedly/conduit!246
2022-01-13 22:24:47 +00:00
Jonas Zohren
f67785caaf Fix(ci): Disable CARGO_HOME caching 2022-01-13 22:24:47 +00:00
Timo Kösters
1119c2f510 Merge branch 'rocksdb' into 'next'
feat: rocksdb backend

See merge request famedly/conduit!217
2022-01-13 22:12:51 +00:00
Timo Kösters
16f826773b
refactor: fix warnings 2022-01-13 22:55:35 +01:00
Timo Kösters
6fa01aa982
fix: remove dbg 2022-01-13 22:44:27 +01:00
Timo Kösters
a336027b0e
fix: better memory usage message 2022-01-13 22:44:27 +01:00
Timo Kösters
447639054e
improvement: higher default pdu capacity 2022-01-13 22:44:27 +01:00
Timo Kösters
9e77f7617c
fix: disable direct IO again 2022-01-13 22:44:27 +01:00
Timo Kösters
7f27af032b
improvement: optimize rocksdb for spinning disks 2022-01-13 22:44:26 +01:00
Timo Kösters
b96822b617
fix: use db options for column families too 2022-01-13 22:44:26 +01:00
Timo Kösters
0bb7d76dec
improvement: rocksdb configuration 2022-01-13 22:44:26 +01:00
Timo Kösters
077e9ad438
improvement: memory usage for caches 2022-01-13 22:44:25 +01:00
Andrej Kacian
68ee1a5408
Add rocksdb implementation of memory_usage() 2022-01-13 22:42:25 +01:00
Andrej Kacian
ff243870f8
Add "database_memory_usage" AdminCommand 2022-01-13 22:42:24 +01:00
Andrej Kacian
71431f330a
Add memory_usage() to DatabaseEngine trait 2022-01-13 22:42:24 +01:00
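A rough sketch of the pattern behind these three commits: a trait-level `memory_usage()` hook that each backend implements and an admin command can print. The trait name matches the commit message, but the toy backend and return types are illustrative, not Conduit's actual code:

```rust
use std::collections::BTreeMap;

// Placeholder trait standing in for the real database-engine abstraction.
trait DatabaseEngine {
    /// Human-readable summary of caches, memtables, and similar.
    fn memory_usage(&self) -> Result<String, String>;
}

struct ToyBackend {
    entries: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl DatabaseEngine for ToyBackend {
    fn memory_usage(&self) -> Result<String, String> {
        let bytes: usize = self.entries.iter().map(|(k, v)| k.len() + v.len()).sum();
        Ok(format!(
            "toy backend: {} entries, ~{} bytes of key/value data",
            self.entries.len(),
            bytes
        ))
    }
}

fn main() {
    let mut backend = ToyBackend { entries: BTreeMap::new() };
    backend.entries.insert(b"key".to_vec(), b"value".to_vec());
    // Something like a `database_memory_usage` admin command would print this.
    println!("{}", backend.memory_usage().unwrap());
}
```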
Timo Kösters
fa6d7f7ccd
feat: database backend selection at runtime 2022-01-13 22:42:22 +01:00
Timo Kösters
4f39d36e98
docs: lazy loading 2022-01-13 22:38:52 +01:00
Timo Kösters
c6d88359d7
fix: incremental lazy loading 2022-01-13 22:38:52 +01:00
Timo Kösters
f285c89006
fix: make incremental sync efficient again 2022-01-13 22:38:52 +01:00
Timo Kösters
93d225fd1e
improvement: faster way to load required state 2022-01-13 22:38:52 +01:00
Timo Kösters
1bd9fd74b3
feat: partially support sync filters 2022-01-13 22:38:52 +01:00
Timo Kösters
68e910bb77
feat: lazy loading 2022-01-13 22:38:50 +01:00
Timo Kösters
5bcc1324ed
fix: auth event fetch order 2022-01-13 22:29:19 +01:00
Timo Kösters
54f4d39e3e
improvement: don't fetch event multiple times 2022-01-13 22:29:17 +01:00
Timo Kösters
b1d9ec3efc
fix: atomic increment 2022-01-13 22:28:18 +01:00
Timo Kösters
ee3d2db8e0
improvement, maybe not safe 2022-01-13 22:10:51 +01:00
Timo Kösters
83a9095cdc
fix? 2022-01-13 22:10:51 +01:00
Timo Kösters
74951cb239
dbg 2022-01-13 22:10:51 +01:00
Timo Kösters
4b4afea2ab
fix auth event fetching 2022-01-13 22:10:51 +01:00
Timo Kösters
c9c9974641
fix: stack overflows when fetching auth events 2022-01-13 22:10:50 +01:00
Timo Kösters
a30b588ede
rocksdb as default 2022-01-13 22:10:50 +01:00
Timo Kösters
1d647a1a9a
improvement: allow rocksdb again 2022-01-13 22:10:43 +01:00
Timo Kösters
b25354c747 Merge branch 'add_remove_appservice' into 'next'
Add ability to remove an appservice

See merge request famedly/conduit!236
2022-01-13 11:38:17 +00:00
Torsten Flammiger
eecd664c43 Reformat code 2022-01-13 12:26:23 +01:00
Timo Kösters
f3ea2df9fe Merge branch 'simpler-traefik-nginx' into 'next'
Make traefik+nginx config more self-contained

See merge request famedly/conduit!239
2022-01-13 11:18:15 +00:00
Timo Kösters
fbcbadf265 Merge branch 'rust-1.53' into 'next'
Restore compatibility with Rust 1.53

See merge request famedly/conduit!244
2022-01-13 11:09:14 +00:00
Jonas Platte
bcf4ede0bc
Restore compatibility with Rust 1.53 2022-01-13 12:06:20 +01:00
Timo Kösters
f5d1dda766 Merge branch 'up-ruma' into 'next'
Upgrade Ruma

See merge request famedly/conduit!243
2022-01-13 10:52:13 +00:00
Jonas Platte
84862352ba
Replace to_string calls on string literals with to_owned 2022-01-13 11:48:40 +01:00
Jonas Platte
cf54185a1c
Use struct literals for consistency 2022-01-13 11:48:18 +01:00
Jonas Platte
349865d3cc
Upgrade Ruma 2022-01-13 11:44:23 +01:00
Timo Kösters
2fa8171e79 Merge branch 'ci-use-sccache' into 'next'
CI: Use sccache for caching

Closes #200

See merge request famedly/conduit!232
2022-01-13 10:42:33 +00:00
Timo Kösters
8e12b47df4 Merge branch 'no-passwords-in-db' into 'next'
Do not store uiaa requests in database

See merge request famedly/conduit!219
2022-01-13 10:33:49 +00:00
Timo Kösters
0ec26b7e96 Merge branch 'next' into 'next'
refactor:moved key watch wake logic to specific module

See merge request famedly/conduit!238
2022-01-13 10:27:56 +00:00
Timo Kösters
b32e85ffa8 Merge branch 'up-ruma' into 'next'
Upgrade Ruma

See merge request famedly/conduit!237
2022-01-13 10:24:45 +00:00
Ticho 34782694
b746f17e56 Make traefik+nginx config more self-contained
The nginx instance which is serving the .well-known endpoints can serve
the simple JSON replies directly from memory, instead of having them
as external files on disk.
2022-01-07 13:06:21 +00:00
Torsten Flammiger
8d51359668 Fix typo and remove unneeded newline 2021-12-26 20:49:19 +01:00
Torsten Flammiger
a69eb277d4 Update count users: It's now list_local_users and contains the number and the usernames 2021-12-26 20:00:31 +01:00
Torsten Flammiger
39787b41cb Rename admin command CountUsers -> CountLocalUsers; Update comments 2021-12-26 12:04:38 +01:00
Torsten Flammiger
2281bcefc6 Finalize count_local_users function 2021-12-26 11:06:28 +01:00
Torsten Flammiger
d21030566c Rename/Add count methods to count_local_users 2021-12-25 21:29:03 +01:00
Torsten Flammiger
567cf6dbe9 Add command count_local_users to database/rooms.rs 2021-12-25 20:51:22 +01:00
Torsten Flammiger
7c1b2625cf Prepare to add an option to list local user accounts from your homeserver 2021-12-24 23:06:54 +01:00
Tglman
a889e884e6 refactor:moved key watch wake logic to specific module 2021-12-23 23:17:43 +00:00
Jonas Platte
aba95b20f3
Upgrade Ruma 2021-12-23 17:40:42 +01:00
Moritz Bitsch
c4a438460e Use Box to store UserID and DeviceID
UserId and DeviceId are of unknown size; use Box to be able to store
them into the userdevicesessionid_uiaarequest BTreeMap
2021-12-22 19:26:23 +01:00
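To illustrate the constraint being worked around here: at that time ruma's `UserId`/`DeviceId` were unsized, string-slice-like types, so, just like `str`, they can only be map keys behind an owning pointer such as `Box`. A sketch with `Box<str>` standing in for them:

```rust
use std::collections::BTreeMap;

fn main() {
    // Box<str> stands in for Box<UserId> / Box<DeviceId> in this sketch.
    // A map keyed by the unsized types directly, e.g. BTreeMap<(str, str), _>,
    // would not compile because map keys must be Sized.
    let mut sessions: BTreeMap<(Box<str>, Box<str>), u64> = BTreeMap::new();
    sessions.insert(("@alice:example.org".into(), "DEVICEABC".into()), 1);
    println!("{} tracked device session(s)", sessions.len());
}
```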
Torsten Flammiger
7f2445be6c On unregister_appservice(service_name), remove the appservice service_name from cache too 2021-12-22 16:48:27 +01:00
Torsten Flammiger
b6c9582cf4 Fix doc style comment according to Rust; VSCode added line breaks 2021-12-22 13:09:56 +01:00
Torsten Flammiger
7857da8a0b Add ability to remove an appservice 2021-12-20 15:46:36 +01:00
Moritz Bitsch
720a54b3bb Use String to store UserId for uiaa request
Fixes compilation error after ruma upgrade
2021-12-18 19:05:18 +01:00
Moritz Bitsch
0725b69abb Clean up userdevicesessionid_uiaarequest BTreeMap
There is no need to encode or decode anything as we are not
saving to disk
2021-12-18 18:57:36 +01:00
Moritz Bitsch
fe8cfe0556 Add database migration to remove stored passwords
uiaarequests can contain plaintext passwords, which were stored on disk
2021-12-18 18:57:36 +01:00
Moritz Bitsch
3d25d46dc5 Use simple BTreeMap to store uiaa requests
some uiaa requests contain plaintext passwords which should never be
persisted to disk.

Currently there is no cleanup implemented (you have to restart conduit)
2021-12-18 18:57:36 +01:00
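A minimal sketch of the idea in these commits: keep UIAA requests, which may contain plaintext passwords, in a process-local map that is never serialized to the database. Type and field names are assumptions for illustration, not Conduit's real ones:

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

struct UiaaStore {
    // (user_id, device_id, session) -> raw auth request JSON, memory only.
    requests: RwLock<BTreeMap<(String, String, String), String>>,
}

impl UiaaStore {
    fn new() -> Self {
        Self { requests: RwLock::new(BTreeMap::new()) }
    }

    fn set_request(&self, user: &str, device: &str, session: &str, body: &str) {
        self.requests
            .write()
            .unwrap()
            .insert((user.into(), device.into(), session.into()), body.into());
    }

    fn get_request(&self, user: &str, device: &str, session: &str) -> Option<String> {
        self.requests
            .read()
            .unwrap()
            .get(&(user.into(), device.into(), session.into()))
            .cloned()
    }
}

fn main() {
    let store = UiaaStore::new();
    store.set_request("@alice:example.org", "DEV", "sess", r#"{"password":"hunter2"}"#);
    // Nothing here touches disk; restarting the process drops all pending
    // requests, matching the "no cleanup, restart conduit" note above.
    assert!(store.get_request("@alice:example.org", "DEV", "sess").is_some());
}
```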
Timo Kösters
9b57c89df6 Merge branch 'more-event-id-arcs' into 'next'
Use Arc for EventIds in PDUs

See merge request famedly/conduit!229
2021-12-16 13:06:30 +00:00
Jonas Platte
34d3f74f36
Use Arc for EventIds in PDUs
Upgrades Ruma again to make this work.
2021-12-16 13:55:24 +01:00
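For context on the shape of this change: putting the event ID behind an `Arc` lets many PDUs and code paths share one allocation, with clones that only bump a reference count. A sketch with a made-up PDU struct and `Arc<str>` standing in for `Arc<EventId>`:

```rust
use std::sync::Arc;

// Placeholder PDU with only the fields needed for the example.
#[derive(Clone, Debug)]
struct Pdu {
    event_id: Arc<str>,
    prev_events: Vec<Arc<str>>,
}

fn main() {
    let event_id: Arc<str> = Arc::from("$abc123:example.org");

    let pdu = Pdu {
        // Cloning the Arc shares the existing allocation instead of
        // copying the string.
        event_id: Arc::clone(&event_id),
        prev_events: vec![Arc::clone(&event_id)],
    };

    // Both handles point at the same underlying data.
    assert!(Arc::ptr_eq(&event_id, &pdu.event_id));
    println!("pdu references {}", pdu.event_id);
}
```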
Timo Kösters
11a21fc136 Merge branch 'up-ruma' into 'next'
Upgrade ruma

See merge request famedly/conduit!228
2021-12-15 14:22:30 +00:00
Jonas Platte
0183d003d0
Revert rename of Ruma<_> parameters 2021-12-15 13:58:25 +01:00
Jonas Platte
f712455047
Reduce EventId copying 2021-12-15 13:00:37 +01:00
Jonas Platte
58ea081762
Use int! macro instead of Int::from 2021-12-15 13:00:37 +01:00
Jonas Platte
bffddbd487
Simplify identifier parsing code 2021-12-15 13:00:37 +01:00
Jonas Platte
41fef1da64
Remove unnecessary .to_string() calls 2021-12-15 13:00:37 +01:00
Jonas Platte
892a0525f2
Upgrade Ruma 2021-12-15 13:00:37 +01:00
Jonas Platte
1fc616320a
Use struct init shorthand 2021-12-15 13:00:37 +01:00
Timo Kösters
14a178d783 Merge branch 'update-docker-base-image' into 'next'
Update docker images

See merge request famedly/conduit!230
2021-12-15 10:14:20 +00:00
Jonas Zohren
339a26f56c Update docker images 2021-12-15 10:14:20 +00:00
Jonas Zohren
adb518fa0d
CI: Use curl instead of wget
The rust docker image already comes with curl, no need to install wget.
2021-12-14 11:16:40 +01:00
Jonas Zohren
f91216dd3c
CI: Optionally use sccache for compilation
This moves compiler caching for incremental builds away from GitLab
caching the whole target/ folder to caching each code unit in S3.
This alleviates the need to zip and unzip and just caches on the fly.

This feature is optional and gated behind the SCCACHE_BIN_URL env
2021-12-14 11:16:02 +01:00
253 changed files with 36289 additions and 22560 deletions


@@ -25,4 +25,4 @@ docker-compose*
rustfmt.toml
# Documentation
*.md
#*.md

.editorconfig Normal file

@@ -0,0 +1,15 @@
# EditorConfig is awesome: https://EditorConfig.org
root = true
[*]
charset = utf-8
end_of_line = lf
tab_width = 4
indent_size = 4
indent_style = space
insert_final_newline = true
max_line_length = 120
[*.nix]
indent_size = 2

.envrc Normal file

@@ -0,0 +1,5 @@
#!/usr/bin/env bash
use flake
PATH_add bin

.gitignore vendored

@@ -31,7 +31,6 @@ modules.xml
### vscode ###
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
@@ -57,9 +56,21 @@ $RECYCLE.BIN/
*.lnk
# Conduit
Rocket.toml
conduit.toml
conduit.db
# Etc.
**/*.rs.bk
cached_target
# Nix artifacts
/result*
# Direnv cache
/.direnv
# Gitlab CI cache
/.gitlab-ci.d
# mdbook output
public/


@@ -1,309 +1,197 @@
stages:
- build
- build docker image
- test
- upload artifacts
- ci
- artifacts
- publish
variables:
GIT_SUBMODULE_STRATEGY: recursive
FF_USE_FASTZIP: 1
CACHE_COMPRESSION_LEVEL: fastest
# --------------------------------------------------------------------- #
# Cargo: Compiling for different architectures #
# --------------------------------------------------------------------- #
.build-cargo-shared-settings:
stage: "build"
needs: []
rules:
- if: '$CI_COMMIT_BRANCH == "master"'
- if: '$CI_COMMIT_BRANCH == "next"'
- if: "$CI_COMMIT_TAG"
interruptible: true
image: "rust:latest"
tags: ["docker"]
cache:
paths:
- cargohome
- target/
key: "build_cache--$TARGET--$CI_COMMIT_BRANCH--release"
variables:
CARGO_PROFILE_RELEASE_LTO: "true"
CARGO_PROFILE_RELEASE_CODEGEN_UNITS: "1"
before_script:
- 'echo "Building for target $TARGET"'
- 'mkdir -p cargohome && CARGOHOME="cargohome"'
- "rustc --version && cargo --version && rustup show" # Print version info for debugging
- "rustup target add $TARGET"
script:
- time cargo build --target $TARGET --release
- 'cp "target/$TARGET/release/conduit" "conduit-$TARGET"'
artifacts:
expire_in: never
build:release:cargo:x86_64-unknown-linux-musl-with-debug:
extends: .build-cargo-shared-settings
image: messense/rust-musl-cross:x86_64-musl
variables:
CARGO_PROFILE_RELEASE_DEBUG: 2 # Enable debug info for flamegraph profiling
TARGET: "x86_64-unknown-linux-musl"
after_script:
- "mv ./conduit-x86_64-unknown-linux-musl ./conduit-x86_64-unknown-linux-musl-with-debug"
artifacts:
name: "conduit-x86_64-unknown-linux-musl-with-debug"
paths:
- "conduit-x86_64-unknown-linux-musl-with-debug"
expose_as: "Conduit for x86_64-unknown-linux-musl-with-debug"
build:release:cargo:x86_64-unknown-linux-musl:
extends: .build-cargo-shared-settings
image: messense/rust-musl-cross:x86_64-musl
variables:
TARGET: "x86_64-unknown-linux-musl"
artifacts:
name: "conduit-x86_64-unknown-linux-musl"
paths:
- "conduit-x86_64-unknown-linux-musl"
expose_as: "Conduit for x86_64-unknown-linux-musl"
build:release:cargo:arm-unknown-linux-musleabihf:
extends: .build-cargo-shared-settings
image: messense/rust-musl-cross:arm-musleabihf
variables:
TARGET: "arm-unknown-linux-musleabihf"
artifacts:
name: "conduit-arm-unknown-linux-musleabihf"
paths:
- "conduit-arm-unknown-linux-musleabihf"
expose_as: "Conduit for arm-unknown-linux-musleabihf"
build:release:cargo:armv7-unknown-linux-musleabihf:
extends: .build-cargo-shared-settings
image: messense/rust-musl-cross:armv7-musleabihf
variables:
TARGET: "armv7-unknown-linux-musleabihf"
artifacts:
name: "conduit-armv7-unknown-linux-musleabihf"
paths:
- "conduit-armv7-unknown-linux-musleabihf"
expose_as: "Conduit for armv7-unknown-linux-musleabihf"
build:release:cargo:aarch64-unknown-linux-musl:
extends: .build-cargo-shared-settings
image: messense/rust-musl-cross:aarch64-musl
variables:
TARGET: "aarch64-unknown-linux-musl"
artifacts:
name: "conduit-aarch64-unknown-linux-musl"
paths:
- "conduit-aarch64-unknown-linux-musl"
expose_as: "Conduit for aarch64-unknown-linux-musl"
.cargo-debug-shared-settings:
extends: ".build-cargo-shared-settings"
rules:
- if: '$CI_COMMIT_BRANCH != "master"'
cache:
key: "build_cache--$TARGET--$CI_COMMIT_BRANCH--debug"
script:
- "time cargo build --target $TARGET"
- 'mv "target/$TARGET/debug/conduit" "conduit-debug-$TARGET"'
artifacts:
expire_in: 4 weeks
build:debug:cargo:x86_64-unknown-linux-musl:
extends: ".cargo-debug-shared-settings"
image: messense/rust-musl-cross:x86_64-musl
variables:
TARGET: "x86_64-unknown-linux-musl"
artifacts:
name: "conduit-debug-x86_64-unknown-linux-musl"
paths:
- "conduit-debug-x86_64-unknown-linux-musl"
expose_as: "Conduit DEBUG for x86_64-unknown-linux-musl"
# --------------------------------------------------------------------- #
# Create and publish docker image #
# --------------------------------------------------------------------- #
.docker-shared-settings:
stage: "build docker image"
image: jdrouet/docker-with-buildx:stable
tags: ["docker"]
services:
- docker:dind
needs:
- "build:release:cargo:x86_64-unknown-linux-musl"
- "build:release:cargo:arm-unknown-linux-musleabihf"
- "build:release:cargo:armv7-unknown-linux-musleabihf"
- "build:release:cargo:aarch64-unknown-linux-musl"
variables:
DOCKER_HOST: tcp://docker:2375/
DOCKER_TLS_CERTDIR: ""
DOCKER_DRIVER: overlay2
PLATFORMS: "linux/arm/v6,linux/arm/v7,linux/arm64,linux/amd64"
DOCKER_FILE: "docker/ci-binaries-packaging.Dockerfile"
cache:
paths:
- docker_cache
key: "$CI_JOB_NAME"
before_script:
- docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
# Only log in to Dockerhub if the credentials are given:
- if [ -n "${DOCKER_HUB}" ]; then docker login -u "$DOCKER_HUB_USER" -p "$DOCKER_HUB_PASSWORD" "$DOCKER_HUB"; fi
script:
# Prepare buildx to build multiarch stuff:
- docker context create 'ci-context'
- docker buildx create --name 'multiarch-builder' --use 'ci-context'
# Copy binaries to their docker arch path
- mkdir -p linux/ && mv ./conduit-x86_64-unknown-linux-musl linux/amd64
- mkdir -p linux/arm/ && mv ./conduit-arm-unknown-linux-musleabihf linux/arm/v6
- mkdir -p linux/arm/ && mv ./conduit-armv7-unknown-linux-musleabihf linux/arm/v7
- mv ./conduit-aarch64-unknown-linux-musl linux/arm64
- 'export CREATED=$(date -u +''%Y-%m-%dT%H:%M:%SZ'') && echo "Docker image creation date: $CREATED"'
# Build and push image:
- >
docker buildx build
--pull
--push
--cache-from=type=local,src=$CI_PROJECT_DIR/docker_cache
--cache-to=type=local,dest=$CI_PROJECT_DIR/docker_cache
--build-arg CREATED=$CREATED
--build-arg VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml)
--build-arg "GIT_REF=$CI_COMMIT_SHORT_SHA"
--platform "$PLATFORMS"
--tag "$TAG"
--tag "$TAG-alpine"
--tag "$TAG-commit-$CI_COMMIT_SHORT_SHA"
--file "$DOCKER_FILE" .
docker:next:gitlab:
extends: .docker-shared-settings
rules:
- if: '$CI_COMMIT_BRANCH == "next"'
variables:
TAG: "$CI_REGISTRY_IMAGE/matrix-conduit:next"
docker:next:dockerhub:
extends: .docker-shared-settings
rules:
- if: '$CI_COMMIT_BRANCH == "next" && $DOCKER_HUB'
variables:
TAG: "$DOCKER_HUB_IMAGE/matrixconduit/matrix-conduit:next"
docker:master:gitlab:
extends: .docker-shared-settings
rules:
- if: '$CI_COMMIT_BRANCH == "master"'
variables:
TAG: "$CI_REGISTRY_IMAGE/matrix-conduit:latest"
docker:master:dockerhub:
extends: .docker-shared-settings
rules:
- if: '$CI_COMMIT_BRANCH == "master" && $DOCKER_HUB'
variables:
TAG: "$DOCKER_HUB_IMAGE/matrixconduit/matrix-conduit:latest"
# --------------------------------------------------------------------- #
# Run tests #
# --------------------------------------------------------------------- #
test:cargo:
stage: "test"
needs: []
image: "rust:latest"
tags: ["docker"]
variables:
CARGO_HOME: "cargohome"
cache:
paths:
- target
- cargohome
key: test_cache
interruptible: true
before_script:
- mkdir -p $CARGO_HOME && echo "using $CARGO_HOME to cache cargo deps"
- apt-get update -yqq
- apt-get install -yqq --no-install-recommends build-essential libssl-dev pkg-config wget
- rustup component add clippy rustfmt
- wget "https://faulty-storage.de/gitlab-report"
- chmod +x ./gitlab-report
script:
- rustc --version && cargo --version # Print version info for debugging
- cargo fmt --all -- --check
- "cargo test --color always --workspace --verbose --locked --no-fail-fast -- -Z unstable-options --format json | ./gitlab-report -p test > $CI_PROJECT_DIR/report.xml"
- "cargo clippy --color always --verbose --message-format=json | ./gitlab-report -p clippy > $CI_PROJECT_DIR/gl-code-quality-report.json"
artifacts:
when: always
reports:
junit: report.xml
codequality: gl-code-quality-report.json
test:sytest:
stage: "test"
allow_failure: true
needs:
- "build:debug:cargo:x86_64-unknown-linux-musl"
image:
name: "valkum/sytest-conduit:latest"
entrypoint: [""]
tags: ["docker"]
variables:
PLUGINS: "https://github.com/valkum/sytest_conduit/archive/master.tar.gz"
before_script:
- "mkdir -p /app"
- "cp ./conduit-debug-x86_64-unknown-linux-musl /app/conduit"
- "chmod +x /app/conduit"
- "rm -rf /src && ln -s $CI_PROJECT_DIR/ /src"
- "mkdir -p /work/server-0/database/ && mkdir -p /work/server-1/database/ && mkdir -p /work/server-2/database/"
- "cd /"
script:
- "SYTEST_EXIT_CODE=0"
- "/bootstrap.sh conduit || SYTEST_EXIT_CODE=1"
- 'perl /sytest/tap-to-junit-xml.pl --puretap --input /logs/results.tap --output $CI_PROJECT_DIR/sytest.xml "Sytest" && cp /logs/results.tap $CI_PROJECT_DIR/results.tap'
- "exit $SYTEST_EXIT_CODE"
artifacts:
when: always
paths:
- "$CI_PROJECT_DIR/sytest.xml"
- "$CI_PROJECT_DIR/results.tap"
reports:
junit: "$CI_PROJECT_DIR/sytest.xml"
# --------------------------------------------------------------------- #
# Store binaries as package so they have download urls #
# --------------------------------------------------------------------- #
publish:package:
stage: "upload artifacts"
needs:
- "build:release:cargo:x86_64-unknown-linux-musl"
- "build:release:cargo:arm-unknown-linux-musleabihf"
- "build:release:cargo:armv7-unknown-linux-musleabihf"
- "build:release:cargo:aarch64-unknown-linux-musl"
# - "build:cargo-deb:x86_64-unknown-linux-gnu"
rules:
- if: '$CI_COMMIT_BRANCH == "master"'
- if: '$CI_COMMIT_BRANCH == "next"'
- if: "$CI_COMMIT_TAG"
image: curlimages/curl:latest
tags: ["docker"]
variables:
GIT_STRATEGY: "none" # Don't need a clean copy of the code, we just operate on artifacts
script:
- 'BASE_URL="${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/conduit-${CI_COMMIT_REF_SLUG}/build-${CI_PIPELINE_ID}"'
- 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file conduit-x86_64-unknown-linux-musl "${BASE_URL}/conduit-x86_64-unknown-linux-musl"'
- 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file conduit-arm-unknown-linux-musleabihf "${BASE_URL}/conduit-arm-unknown-linux-musleabihf"'
- 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file conduit-armv7-unknown-linux-musleabihf "${BASE_URL}/conduit-armv7-unknown-linux-musleabihf"'
- 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file conduit-aarch64-unknown-linux-musl "${BASE_URL}/conduit-aarch64-unknown-linux-musl"'
# Makes some things print in color
TERM: ansi
# Faster cache and artifact compression / decompression
FF_USE_FASTZIP: true
# Print progress reports for cache and artifact transfers
TRANSFER_METER_FREQUENCY: 5s
# Avoid duplicate pipelines
# See: https://docs.gitlab.com/ee/ci/yaml/workflow.html#switch-between-branch-pipelines-and-merge-request-pipelines
workflow:
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
- if: "$CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS"
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
when: never
- if: "$CI_COMMIT_BRANCH"
- if: $CI
before_script:
# Enable nix-command and flakes
- if command -v nix > /dev/null; then echo "experimental-features = nix-command flakes" >> /etc/nix/nix.conf; fi
# Add our own binary cache
- if command -v nix > /dev/null; then echo "extra-substituters = https://attic.conduit.rs/conduit" >> /etc/nix/nix.conf; fi
- if command -v nix > /dev/null; then echo "extra-trusted-public-keys = conduit:ddcaWZiWm0l0IXZlO8FERRdWvEufwmd0Negl1P+c0Ns=" >> /etc/nix/nix.conf; fi
# Add alternate binary cache
- if command -v nix > /dev/null && [ -n "$ATTIC_ENDPOINT" ]; then echo "extra-substituters = $ATTIC_ENDPOINT" >> /etc/nix/nix.conf; fi
- if command -v nix > /dev/null && [ -n "$ATTIC_PUBLIC_KEY" ]; then echo "extra-trusted-public-keys = $ATTIC_PUBLIC_KEY" >> /etc/nix/nix.conf; fi
# Add crane binary cache
- if command -v nix > /dev/null; then echo "extra-substituters = https://crane.cachix.org" >> /etc/nix/nix.conf; fi
- if command -v nix > /dev/null; then echo "extra-trusted-public-keys = crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk=" >> /etc/nix/nix.conf; fi
# Add nix-community binary cache
- if command -v nix > /dev/null; then echo "extra-substituters = https://nix-community.cachix.org" >> /etc/nix/nix.conf; fi
- if command -v nix > /dev/null; then echo "extra-trusted-public-keys = nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=" >> /etc/nix/nix.conf; fi
# Install direnv and nix-direnv
- if command -v nix > /dev/null; then nix-env -iA nixpkgs.direnv nixpkgs.nix-direnv; fi
# Allow .envrc
- if command -v nix > /dev/null; then direnv allow; fi
# Set CARGO_HOME to a cacheable path
- export CARGO_HOME="$(git rev-parse --show-toplevel)/.gitlab-ci.d/cargo"
# Cache attic client
- if command -v nix > /dev/null; then ./bin/nix-build-and-cache --inputs-from . attic; fi
ci:
stage: ci
image: nixos/nix:2.22.0
script:
# Cache the inputs required for the devShell
- ./bin/nix-build-and-cache .#devShells.x86_64-linux.default.inputDerivation
- direnv exec . engage
cache:
key: nix
paths:
- target
- .gitlab-ci.d
rules:
# CI on upstream runners (only available for maintainers)
- if: $CI_PIPELINE_SOURCE == "merge_request_event" && $IS_UPSTREAM_CI == "true"
# Manual CI on unprotected branches that are not MRs
- if: $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_PROTECTED == "false"
when: manual
# Manual CI on forks
- if: $IS_UPSTREAM_CI != "true"
when: manual
- if: $CI
interruptible: true
artifacts:
stage: artifacts
image: nixos/nix:2.22.0
script:
- ./bin/nix-build-and-cache .#static-x86_64-unknown-linux-musl
- cp result/bin/conduit x86_64-unknown-linux-musl
- mkdir -p target/release
- cp result/bin/conduit target/release
- direnv exec . cargo deb --no-build
- mv target/debian/*.deb x86_64-unknown-linux-musl.deb
# Since the OCI image package is based on the binary package, this has the
# fun side effect of uploading the normal binary too. Conduit users who are
# deploying with Nix can leverage this fact by adding our binary cache to
# their systems.
#
# Note that although we have an `oci-image-x86_64-unknown-linux-musl`
# output, we don't build it because it would be largely redundant to this
# one since it's all containerized anyway.
- ./bin/nix-build-and-cache .#oci-image
- cp result oci-image-amd64.tar.gz
- ./bin/nix-build-and-cache .#static-aarch64-unknown-linux-musl
- cp result/bin/conduit aarch64-unknown-linux-musl
- mkdir -p target/aarch64-unknown-linux-musl/release
- cp result/bin/conduit target/aarch64-unknown-linux-musl/release
- direnv exec . cargo deb --no-strip --no-build --target aarch64-unknown-linux-musl
- mv target/aarch64-unknown-linux-musl/debian/*.deb aarch64-unknown-linux-musl.deb
- ./bin/nix-build-and-cache .#oci-image-aarch64-unknown-linux-musl
- cp result oci-image-arm64v8.tar.gz
- ./bin/nix-build-and-cache .#book
# We can't just copy the symlink, we need to dereference it https://gitlab.com/gitlab-org/gitlab/-/issues/19746
- cp -r --dereference result public
artifacts:
paths:
- x86_64-unknown-linux-musl
- aarch64-unknown-linux-musl
- x86_64-unknown-linux-musl.deb
- aarch64-unknown-linux-musl.deb
- oci-image-amd64.tar.gz
- oci-image-arm64v8.tar.gz
- public
rules:
# CI required for all MRs
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
# Optional CI on forks
- if: $IS_UPSTREAM_CI != "true"
when: manual
allow_failure: true
- if: $CI
interruptible: true
.push-oci-image:
stage: publish
image: docker:25.0.0
services:
- docker:25.0.0-dind
variables:
IMAGE_SUFFIX_AMD64: amd64
IMAGE_SUFFIX_ARM64V8: arm64v8
script:
- docker load -i oci-image-amd64.tar.gz
- IMAGE_ID_AMD64=$(docker images -q conduit:next)
- docker load -i oci-image-arm64v8.tar.gz
- IMAGE_ID_ARM64V8=$(docker images -q conduit:next)
# Tag and push the architecture specific images
- docker tag $IMAGE_ID_AMD64 $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_AMD64
- docker tag $IMAGE_ID_ARM64V8 $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_ARM64V8
- docker push $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_AMD64
- docker push $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_ARM64V8
# Tag the multi-arch image
- docker manifest create $IMAGE_NAME:$CI_COMMIT_SHA --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_AMD64 --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_ARM64V8
- docker manifest push $IMAGE_NAME:$CI_COMMIT_SHA
# Tag and push the git ref
- docker manifest create $IMAGE_NAME:$CI_COMMIT_REF_NAME --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_AMD64 --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_ARM64V8
- docker manifest push $IMAGE_NAME:$CI_COMMIT_REF_NAME
# Tag git tags as 'latest'
- |
if [[ -n "$CI_COMMIT_TAG" ]]; then
docker manifest create $IMAGE_NAME:latest --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_AMD64 --amend $IMAGE_NAME:$CI_COMMIT_SHA-$IMAGE_SUFFIX_ARM64V8
docker manifest push $IMAGE_NAME:latest
fi
dependencies:
- artifacts
only:
- next
- master
- tags
oci-image:push-gitlab:
extends: .push-oci-image
variables:
IMAGE_NAME: $CI_REGISTRY_IMAGE/matrix-conduit
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
oci-image:push-dockerhub:
extends: .push-oci-image
variables:
IMAGE_NAME: matrixconduit/matrix-conduit
before_script:
- docker login -u $DOCKER_HUB_USER -p $DOCKER_HUB_PASSWORD
pages:
stage: publish
dependencies:
- artifacts
only:
- next
script:
- "true"
artifacts:
paths:
- public

.gitlab/CODEOWNERS Normal file

@@ -0,0 +1,5 @@
# Nix things
.envrc @CobaltCause
flake.lock @CobaltCause
flake.nix @CobaltCause
nix/ @CobaltCause

.gitlab/route-map.yml Normal file

@@ -0,0 +1,3 @@
# Docs: Map markdown to html files
- source: /docs/(.+)\.md/
public: '\1.html'


@@ -0,0 +1,37 @@
#!/bin/sh
set -eux
# --------------------------------------------------------------------- #
# #
# Configures docker buildx to use a remote server for arm building. #
# Expects $SSH_PRIVATE_KEY to be a valid ssh ed25519 private key with #
# access to the server $ARM_SERVER_USER@$ARM_SERVER_IP #
# #
# This is expected to only be used in the official CI/CD pipeline! #
# #
# Requirements: openssh-client, docker buildx #
# Inspired by: https://depot.dev/blog/building-arm-containers #
# #
# --------------------------------------------------------------------- #
cat "$BUILD_SERVER_SSH_PRIVATE_KEY" | ssh-add -
# Test server connections:
ssh "$ARM_SERVER_USER@$ARM_SERVER_IP" "uname -a"
ssh "$AMD_SERVER_USER@$AMD_SERVER_IP" "uname -a"
# Connect remote arm64 server for all arm builds:
docker buildx create \
--name "multi" \
--driver "docker-container" \
--platform "linux/arm64,linux/arm/v7" \
"ssh://$ARM_SERVER_USER@$ARM_SERVER_IP"
# Connect remote amd64 server for amd64 builds:
docker buildx create --append \
--name "multi" \
--driver "docker-container" \
--platform "linux/amd64" \
"ssh://$AMD_SERVER_USER@$AMD_SERVER_IP"
docker buildx use multi

.vscode/extensions.json vendored Normal file

@@ -0,0 +1,11 @@
{
"recommendations": [
"rust-lang.rust-analyzer",
"bungcip.better-toml",
"ms-azuretools.vscode-docker",
"eamodio.gitlens",
"serayuzgur.crates",
"vadimcn.vscode-lldb",
"timonwong.shellcheck"
]
}

.vscode/launch.json vendored Normal file

@@ -0,0 +1,35 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug conduit",
"sourceLanguages": ["rust"],
"cargo": {
"args": [
"build",
"--bin=conduit",
"--package=conduit"
],
"filter": {
"name": "conduit",
"kind": "bin"
}
},
"args": [],
"env": {
"RUST_BACKTRACE": "1",
"CONDUIT_CONFIG": "",
"CONDUIT_SERVER_NAME": "localhost",
"CONDUIT_DATABASE_PATH": "/tmp",
"CONDUIT_ADDRESS": "0.0.0.0",
"CONDUIT_PORT": "6167"
},
"cwd": "${workspaceFolder}"
}
]
}


@@ -1,3 +0,0 @@
{
"rust-analyzer.procMacro.enable": true
}


@@ -1,86 +0,0 @@
# Setting up Appservices
## Getting help
If you run into any problems while setting up an Appservice, write an email to `timo@koesters.xyz`, ask us in `#conduit:matrix.org` or [open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new).
## Set up the appservice - general instructions
Follow whatever instructions are given by the appservice. This usually includes
downloading, changing its config (setting domain, homeserver url, port etc.)
and later starting it.
At some point the appservice guide should ask you to add a registration yaml
file to the homeserver. In Synapse you would do this by adding the path to the
homeserver.yaml, but in Conduit you can do this from within Matrix:
First, go into the #admins room of your homeserver. The first person that
registered on the homeserver automatically joins it. Then send a message into
the room like this:
@conduit:your.server.name: register_appservice
```
paste
the
contents
of
the
yaml
registration
here
```
You can confirm it worked by sending a message like this:
`@conduit:your.server.name: list_appservices`
The @conduit bot should answer with `Appservices (1): your-bridge`
Then you are done. Conduit will send messages to the appservices and the
appservice can send requests to the homeserver. You don't need to restart
Conduit, but if it doesn't work, restarting while the appservice is running
could help.
## Appservice-specific instructions
### Tested appservices
These appservices have been tested and work with Conduit without any extra steps:
- [matrix-appservice-discord](https://github.com/Half-Shot/matrix-appservice-discord)
- [mautrix-hangouts](https://github.com/mautrix/hangouts/)
- [mautrix-telegram](https://github.com/mautrix/telegram/)
### [mautrix-signal](https://github.com/mautrix/signal)
There are a few things you need to do, in order for the Signal bridge (at least
up to version `0.2.0`) to work. How you do this depends on whether you use
Docker or `virtualenv` to run it. In either case you need to modify
[portal.py](https://github.com/mautrix/signal/blob/master/mautrix_signal/portal.py).
Do this **before** following the bridge installation guide.
1. **Create a copy of `portal.py`**. Go to
[portal.py](https://github.com/mautrix/signal/blob/master/mautrix_signal/portal.py)
at [mautrix-signal](https://github.com/mautrix/signal) (make sure you change to
the correct commit/version of mautrix-signal you're using) and copy its
content. Create a new `portal.py` on your system and paste the content in.
2. **Patch the copy**. Exact line numbers may be slightly different, look nearby if they don't match:
- [Line 1020](https://github.com/mautrix/signal/blob/4ea831536f154aba6419d13292479eb383ea3308/mautrix_signal/portal.py#L1020)
```diff
--- levels.users[self.main_intent.mxid] = 9001 if is_initial else 100
+++ levels.users[self.main_intent.mxid] = 100 if is_initial else 100
```
- [Between lines 1041 and 1042](https://github.com/mautrix/signal/blob/4ea831536f154aba6419d13292479eb383ea3308/mautrix_signal/portal.py#L1041-L1042) add a new line:
```diff
"type": str(EventType.ROOM_POWER_LEVELS),
+++ "state_key": "",
"content": power_levels.serialize(),
```
3. **Deploy the patch**. This is different depending on how you have `mautrix-signal` deployed:
- [*If using virtualenv*] Copy your patched `portal.py` to `./lib/python3.7/site-packages/mautrix_signal/portal.py` (the exact version of Python may be different on your system).
- [*If using Docker*] Map the patched `portal.py` into the `mautrix-signal` container:
```yaml
volumes:
- ./your/path/on/host/portal.py:/usr/lib/python3.9/site-packages/mautrix_signal/portal.py
```
4. Now continue with the [bridge installation instructions ](https://docs.mau.fi/bridges/index.html) and the general bridge notes above.

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,134 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement over email at
coc@koesters.xyz or over Matrix at @timo:conduit.rs.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations


@@ -1,11 +0,0 @@
Install docker:
```
$ sudo apt install docker
$ sudo usermod -aG docker $USER
$ exec sudo su -l $USER
$ sudo systemctl start docker
$ cargo install cross
$ cross build --release --target armv7-unknown-linux-musleabihf
```
The cross-compiled binary is at target/armv7-unknown-linux-musleabihf/release/conduit

Cargo.lock generated

File diff suppressed because it is too large


@@ -1,98 +1,191 @@
[workspace.lints.rust]
explicit_outlives_requirements = "warn"
unused_qualifications = "warn"
[workspace.lints.clippy]
cloned_instead_of_copied = "warn"
dbg_macro = "warn"
str_to_string = "warn"
[package]
name = "conduit"
description = "A Matrix homeserver written in Rust"
license = "Apache-2.0"
authors = ["timokoesters <timo@koesters.xyz>"]
description = "A Matrix homeserver written in Rust"
edition = "2021"
homepage = "https://conduit.rs"
repository = "https://gitlab.com/famedly/conduit"
license = "Apache-2.0"
name = "conduit"
readme = "README.md"
version = "0.2.0"
edition = "2018"
repository = "https://gitlab.com/famedly/conduit"
version = "0.10.0-alpha"
# See also `rust-toolchain.toml`
rust-version = "1.79.0"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lints]
workspace = true
[dependencies]
# Used to handle requests
# TODO: This can become optional as soon as proper configs are supported
# rocket = { git = "https://github.com/SergioBenitez/Rocket.git", rev = "801e04bd5369eb39e126c75f6d11e1e9597304d8", features = ["tls"] } # Used to handle requests
rocket = { version = "0.5.0-rc.1", features = ["tls"] } # Used to handle requests
# Web framework
axum = { version = "0.7", default-features = false, features = [
"form",
"http1",
"http2",
"json",
"matched-path",
], optional = true }
axum-extra = { version = "0.9", features = ["typed-header"] }
axum-server = { version = "0.6", features = ["tls-rustls"] }
tower = { version = "0.4.13", features = ["util"] }
tower-http = { version = "0.5", features = [
"add-extension",
"cors",
"sensitive-headers",
"trace",
"util",
] }
tower-service = "0.3"
# Used for matrix spec type definitions and helpers
#ruma = { version = "0.4.0", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-pre-spec", "unstable-exhaustive-types"] }
ruma = { git = "https://github.com/ruma/ruma", rev = "e7f01ca55a1eff437bad754bf0554cc09f44ec2a", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-pre-spec", "unstable-exhaustive-types"] }
#ruma = { git = "https://github.com/timokoesters/ruma", rev = "50c1db7e0a3a21fc794b0cce3b64285a4c750c71", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-pre-spec", "unstable-exhaustive-types"] }
#ruma = { path = "../ruma/crates/ruma", features = ["compat", "rand", "appservice-api-c", "client-api", "federation-api", "push-gateway-api-c", "state-res", "unstable-pre-spec", "unstable-exhaustive-types"] }
# Used for long polling and federation sender, should be the same as rocket::tokio
tokio = "1.11.0"
# Async runtime and utilities
tokio = { version = "1.28.1", features = ["fs", "macros", "signal", "sync"] }
# Used for storing data permanently
sled = { version = "0.34.6", features = ["compression", "no_metrics"], optional = true }
#sled = { version = "0.34.7", features = ["compression", "no_metrics"], optional = true }
#sled = { git = "https://github.com/spacejam/sled.git", rev = "e4640e0773595229f398438886f19bca6f7326a2", features = ["compression"] }
persy = { version = "1.4.4", optional = true, features = ["background_ops"] }
# Used for the http request / response body type for Ruma endpoints used with reqwest
bytes = "1.1.0"
# Used for rocket<->ruma conversions
http = "0.2.4"
bytes = "1.4.0"
http = "1"
# Used to find data directory for default db path
directories = "3.0.2"
directories = "5"
# Used for ruma wrapper
serde_json = { version = "1.0.67", features = ["raw_value"] }
serde_json = { version = "1.0.96", features = ["raw_value"] }
# Used for appservice registration files
serde_yaml = "0.8.20"
serde_yaml = "0.9.21"
# Used for pdu definition
serde = "1.0.130"
serde = { version = "1.0.163", features = ["rc"] }
# Used for secure identifiers
rand = "0.8.4"
rand = "0.8.5"
# Used to hash passwords
rust-argon2 = "0.8.3"
rust-argon2 = "2"
# Used to send requests
reqwest = { version = "0.11.4", default-features = false, features = ["rustls-tls-native-roots", "socks"] }
# Custom TLS verifier
rustls = { version = "0.19.1", features = ["dangerous_configuration"] }
rustls-native-certs = "0.5.0"
webpki = "0.22.0"
hyper = "1.1"
hyper-util = { version = "0.1", features = [
"client",
"client-legacy",
"http1",
"http2",
] }
reqwest = { version = "0.12", default-features = false, features = [
"rustls-tls-native-roots",
"socks",
] }
# Used for conduit::Error type
thiserror = "1.0.28"
thiserror = "1.0.40"
# Used to generate thumbnails for images
image = { version = "0.23.14", default-features = false, features = ["jpeg", "png", "gif"] }
image = { version = "0.25", default-features = false, features = [
"gif",
"jpeg",
"png",
] }
# Used to encode server public key
base64 = "0.13.0"
base64 = "0.22"
# Used when hashing the state
ring = "0.16.20"
ring = "0.17.7"
# Used when querying the SRV record of other servers
trust-dns-resolver = "0.20.3"
hickory-resolver = "0.24"
# Used to find matching events for appservices
regex = "1.5.4"
regex = "1.8.1"
# jwt jsonwebtokens
jsonwebtoken = "7.2.0"
jsonwebtoken = "9.2.0"
# Performance measurements
tracing = { version = "0.1.26", features = ["release_max_level_warn"] }
tracing-subscriber = "0.2.20"
tracing-flame = "0.1.0"
opentelemetry = { version = "0.16.0", features = ["rt-tokio"] }
opentelemetry-jaeger = { version = "0.15.0", features = ["rt-tokio"] }
lru-cache = "0.1.2"
rusqlite = { version = "0.25.3", optional = true, features = ["bundled"] }
parking_lot = { version = "0.11.2", optional = true }
crossbeam = { version = "0.8.1", optional = true }
num_cpus = "1.13.0"
threadpool = "1.8.1"
heed = { git = "https://github.com/timokoesters/heed.git", rev = "f6f825da7fb2c758867e05ad973ef800a6fe1d5d", optional = true }
thread_local = "1.1.3"
# used for TURN server authentication
hmac = "0.11.0"
sha-1 = "0.9.8"
# used to embed Element Web into Conduit
rust-embed="6.3.0"
opentelemetry = "0.22"
opentelemetry-jaeger-propagator = "0.1"
opentelemetry-otlp = "0.15"
opentelemetry_sdk = { version = "0.22", features = ["rt-tokio"] }
tracing = "0.1.37"
tracing-flame = "0.2.0"
tracing-opentelemetry = "0.23"
tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
lru-cache = "0.1.2"
parking_lot = { version = "0.12.1", optional = true }
rusqlite = { version = "0.31", optional = true, features = ["bundled"] }
# crossbeam = { version = "0.8.2", optional = true }
num_cpus = "1.15.0"
threadpool = "1.8.1"
# heed = { git = "https://github.com/timokoesters/heed.git", rev = "f6f825da7fb2c758867e05ad973ef800a6fe1d5d", optional = true }
# Used for ruma wrapper
serde_html_form = "0.2.0"
thread_local = "1.1.7"
# used for TURN server authentication
hmac = "0.12.1"
sha-1 = "0.10.1"
# used for conduit's CLI and admin room command parsing
clap = { version = "4.3.0", default-features = false, features = [
"derive",
"error-context",
"help",
"std",
"string",
"usage",
] }
futures-util = { version = "0.3.28", default-features = false }
# Used for reading the configuration from conduit.toml & environment variables
figment = { version = "0.10.8", features = ["env", "toml"] }
# Validating urls in config
url = { version = "2", features = ["serde"] }
async-trait = "0.1.68"
tikv-jemallocator = { version = "0.5.0", features = [
"unprefixed_malloc_on_supported_platforms",
], optional = true }
sd-notify = { version = "0.4.1", optional = true }
# Used for matrix spec type definitions and helpers
[dependencies.ruma]
features = [
"appservice-api-c",
"client-api",
"compat",
"federation-api",
"push-gateway-api-c",
"rand",
"ring-compat",
"server-util",
"state-res",
"unstable-exhaustive-types",
"unstable-msc2448",
"unstable-msc3575",
"unstable-unspecified",
]
git = "https://github.com/ruma/ruma"
[dependencies.rocksdb]
features = ["lz4", "multi-threaded-cf", "zstd"]
optional = true
package = "rust-rocksdb"
version = "0.25"
[target.'cfg(unix)'.dependencies]
nix = { version = "0.28", features = ["resource"] }
[features]
default = ["conduit_bin", "backend_sqlite"]
backend_sled = ["sled"]
default = ["backend_rocksdb", "backend_sqlite", "conduit_bin", "systemd"]
#backend_sled = ["sled"]
backend_persy = ["parking_lot", "persy"]
backend_sqlite = ["sqlite"]
backend_heed = ["heed", "crossbeam"]
sqlite = ["rusqlite", "parking_lot", "crossbeam", "tokio/signal"]
conduit_bin = [] # TODO: add rocket to this when it is optional
#backend_heed = ["heed", "crossbeam"]
backend_rocksdb = ["rocksdb"]
conduit_bin = ["axum"]
jemalloc = ["tikv-jemallocator"]
sqlite = ["parking_lot", "rusqlite", "tokio/signal"]
systemd = ["sd-notify"]
[[bin]]
name = "conduit"
@@ -104,35 +197,45 @@ name = "conduit"
path = "src/lib.rs"
[package.metadata.deb]
name = "matrix-conduit"
maintainer = "Paul van Tilburg <paul@luon.net>"
assets = [
[
"README.md",
"usr/share/doc/matrix-conduit/",
"644",
],
[
"debian/README.md",
"usr/share/doc/matrix-conduit/README.Debian",
"644",
],
[
"target/release/conduit",
"usr/sbin/matrix-conduit",
"755",
],
]
conf-files = ["/etc/matrix-conduit/conduit.toml"]
copyright = "2020, Timo Kösters <timo@koesters.xyz>"
license-file = ["LICENSE", "3"]
depends = "$auto, ca-certificates"
extended-description = """\
A fast Matrix homeserver that is optimized for smaller, personal servers, \
instead of a server that has high scalability."""
section = "net"
priority = "optional"
assets = [
["debian/README.Debian", "usr/share/doc/matrix-conduit/", "644"],
["README.md", "usr/share/doc/matrix-conduit/", "644"],
["target/release/conduit", "usr/sbin/matrix-conduit", "755"],
]
conf-files = [
"/etc/matrix-conduit/conduit.toml"
]
license-file = ["LICENSE", "3"]
maintainer = "Paul van Tilburg <paul@luon.net>"
maintainer-scripts = "debian/"
name = "matrix-conduit"
priority = "optional"
section = "net"
systemd-units = { unit-name = "matrix-conduit" }
[profile.dev]
lto = 'off'
incremental = true
lto = 'off'
[profile.release]
lto = 'thin'
codegen-units = 32
incremental = true
codegen-units=32
lto = 'thin'
# If you want to make flamegraphs, enable debug info:
# debug = true

DEPLOY.md

@@ -1,239 +0,0 @@
# Deploying Conduit
## Getting help
If you run into any problems while setting up Conduit, write an email to `timo@koesters.xyz`, ask us
in `#conduit:matrix.org` or [open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new).
## Installing Conduit
Although you might be able to compile Conduit for Windows, we do recommend running it on a linux server. We therefore
only offer Linux binaries.
You may simply download the binary that fits your machine. Run `uname -m` to see what you need. Now copy the right url:
| CPU Architecture | Download stable version |
| ------------------------------------------- | ------------------------------ |
| x86_64 / amd64 (Most servers and computers) | [Download][x86_64-musl-master] |
| armv6 | [Download][armv6-musl-master] |
| armv7 (e.g. Raspberry Pi by default) | [Download][armv7-musl-master] |
| armv8 / aarch64 | [Download][armv8-musl-master] |
[x86_64-musl-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/conduit-x86_64-unknown-linux-musl?job=build:release:cargo:x86_64-unknown-linux-musl
[armv6-musl-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/conduit-arm-unknown-linux-musleabihf?job=build:release:cargo:arm-unknown-linux-musleabihf
[armv7-musl-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/conduit-armv7-unknown-linux-musleabihf?job=build:release:cargo:armv7-unknown-linux-musleabihf
[armv8-musl-master]: https://gitlab.com/famedly/conduit/-/jobs/artifacts/master/raw/conduit-aarch64-unknown-linux-musl?job=build:release:cargo:aarch64-unknown-linux-musl
```bash
$ sudo wget -O /usr/local/bin/matrix-conduit <url>
$ sudo chmod +x /usr/local/bin/matrix-conduit
```
Alternatively, you may compile the binary yourself using
```bash
$ cargo build --release
```
Note that this currently requires Rust 1.50.
If you want to cross compile Conduit to another architecture, read the [Cross-Compile Guide](CROSS_COMPILE.md).
## Adding a Conduit user
While Conduit can run as any user it is usually better to use dedicated users for different services. This also allows
you to make sure that the file permissions are correctly set up.
In Debian you can use this command to create a Conduit user:
```bash
sudo adduser --system conduit --no-create-home
```
## Setting up a systemd service
Now we'll set up a systemd service for Conduit, so it's easy to start/stop Conduit and set it to autostart when your
server reboots. Simply paste the default systemd service you can find below into
`/etc/systemd/system/conduit.service`.
```systemd
[Unit]
Description=Conduit Matrix Server
After=network.target
[Service]
Environment="CONDUIT_CONFIG=/etc/matrix-conduit/conduit.toml"
User=conduit
Group=nogroup
Restart=always
ExecStart=/usr/local/bin/matrix-conduit
[Install]
WantedBy=multi-user.target
```
Finally, run
```bash
$ sudo systemctl daemon-reload
```
## Creating the Conduit configuration file
Now we need to create the Conduit's config file in `/etc/matrix-conduit/conduit.toml`. Paste this in **and take a moment
to read it. You need to change at least the server name.**
```toml
[global]
# The server_name is the name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs to be reachable at https://your.server.name/ on port
# 443 (client-server) and 8448 (federation) OR you can create /.well-known
# files to redirect requests. See
# https://matrix.org/docs/spec/client_server/latest#get-well-known-matrix-client
# and https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information
# YOU NEED TO EDIT THIS
#server_name = "your.server.name"
# This is the only directory where Conduit will save its data
database_path = "/var/lib/matrix-conduit/conduit_db"
# The port Conduit will be running on. You need to set up a reverse proxy in
# your web server (e.g. apache or nginx), so all requests to /_matrix on port
# 443 and 8448 will be forwarded to the Conduit instance running on this port
port = 6167
# Max size for uploads
max_request_size = 20_000_000 # in bytes
# Enables registration. If set to false, no users can register on this server.
allow_registration = true
# Disable encryption, so no new encrypted rooms can be created
# Note: existing rooms will continue to work
allow_encryption = true
allow_federation = true
trusted_servers = ["matrix.org"]
#max_concurrent_requests = 100 # How many requests Conduit sends to other servers at the same time
#workers = 4 # default: cpu core count * 2
address = "127.0.0.1" # This makes sure Conduit can only be reached using the reverse proxy
# The total amount of memory that the database will use.
#db_cache_capacity_mb = 200
```
## Setting the correct file permissions
As we are using a Conduit specific user we need to allow it to read the config. To do that you can run this command on
Debian:
```bash
sudo chown -R conduit:nogroup /etc/matrix-conduit
```
If you use the default database path you also need to run this:
```bash
sudo mkdir -p /var/lib/matrix-conduit/conduit_db
sudo chown -R conduit:nogroup /var/lib/matrix-conduit/conduit_db
```
## Setting up the Reverse Proxy
This depends on whether you use Apache, Nginx or another web server.
### Apache
Create `/etc/apache2/sites-enabled/050-conduit.conf` and copy-and-paste this:
```apache
Listen 8448
<VirtualHost *:443 *:8448>
ServerName your.server.name # EDIT THIS
AllowEncodedSlashes NoDecode
ProxyPass /_matrix/ http://127.0.0.1:6167/_matrix/ nocanon
ProxyPassReverse /_matrix/ http://127.0.0.1:6167/_matrix/
</VirtualHost>
```
**You need to make some edits again.** When you are done, run
```bash
$ sudo systemctl reload apache2
```
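If the Apache proxy modules are not enabled yet, the `ProxyPass` directives above will fail. On Debian-based systems you can usually enable them with:
```bash
$ sudo a2enmod proxy proxy_http
$ sudo systemctl restart apache2
```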
### Nginx
If you use Nginx and not Apache, add the following server section inside the http section of `/etc/nginx/nginx.conf`
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
listen 8448 ssl http2;
listen [::]:8448 ssl http2;
server_name your.server.name; # EDIT THIS
merge_slashes off;
location /_matrix/ {
proxy_pass http://127.0.0.1:6167$request_uri;
proxy_set_header Host $http_host;
proxy_buffering off;
}
ssl_certificate /etc/letsencrypt/live/your.server.name/fullchain.pem; # EDIT THIS
ssl_certificate_key /etc/letsencrypt/live/your.server.name/privkey.pem; # EDIT THIS
ssl_trusted_certificate /etc/letsencrypt/live/your.server.name/chain.pem; # EDIT THIS
include /etc/letsencrypt/options-ssl-nginx.conf;
}
```
**You need to make some edits again.** When you are done, run
```bash
$ sudo systemctl reload nginx
```
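If you want to catch configuration mistakes before reloading, you can let nginx validate the file first:
```bash
$ sudo nginx -t
```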
## SSL Certificate
The easiest way to get an SSL certificate, if you don't have one already, is to install `certbot` and run this:
```bash
$ sudo certbot -d your.server.name
```
## You're done!
Now you can start Conduit with:
```bash
$ sudo systemctl start conduit
```
Set it to start automatically when your system boots with:
```bash
$ sudo systemctl enable conduit
```
## How do I know it works?
You can open <https://app.element.io>, enter your homeserver and try to register.
You can also use these commands as a quick health check.
```bash
$ curl https://your.server.name/_matrix/client/versions
$ curl https://your.server.name:8448/_matrix/client/versions
```
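If everything is set up correctly, both commands should return a small JSON document listing the supported Matrix spec versions, roughly along these lines (the exact versions depend on your Conduit version):
```bash
$ curl https://your.server.name/_matrix/client/versions
# {"versions":["r0.5.0","r0.6.0","v1.1","v1.2"], ...}
```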
If you want to set up an appservice, take a look at the [Appservice Guide](APPSERVICES.md).


@ -1,82 +0,0 @@
# syntax=docker/dockerfile:1
FROM docker.io/rust:1.53-alpine AS builder
WORKDIR /usr/src/conduit
# Install required packages to build Conduit and its dependencies
RUN apk add musl-dev
# == Build dependencies without our own code separately for caching ==
#
# Need a fake main.rs since Cargo refuses to build anything otherwise.
#
# See https://github.com/rust-lang/cargo/issues/2644 for a Cargo feature
# request that would allow just dependencies to be compiled, presumably
# regardless of whether source files are available.
RUN mkdir src && touch src/lib.rs && echo 'fn main() {}' > src/main.rs
COPY Cargo.toml Cargo.lock ./
RUN cargo build --release && rm -r src
# Copy over actual Conduit sources
COPY src src
# main.rs and lib.rs need their timestamp updated for this to work correctly since
# otherwise the build with the fake main.rs from above is newer than the
# source files (COPY preserves timestamps).
#
# Builds conduit and places the binary at /usr/src/conduit/target/release/conduit
RUN touch src/main.rs && touch src/lib.rs && cargo build --release
# ---------------------------------------------------------------------------------------------------------------
# Stuff below this line actually ends up in the resulting docker image
# ---------------------------------------------------------------------------------------------------------------
FROM docker.io/alpine:3.14 AS runner
# Standard port on which Conduit launches.
# You still need to map the port when using the docker command or docker-compose.
EXPOSE 6167
# Note from @jfowl: I would like to remove this in the future and just have the Docker version be configured with envs.
ENV CONDUIT_CONFIG="/srv/conduit/conduit.toml"
# Conduit needs:
# ca-certificates: for https
# libgcc: Apparently this is needed, even if I (@jfowl) don't know exactly why. But whatever, it's not that big.
RUN apk add --no-cache \
ca-certificates \
curl \
libgcc
# Created directory for the database and media files
RUN mkdir -p /srv/conduit/.local/share/conduit
# Test if Conduit is still alive, uses the same endpoint as Element
COPY ./docker/healthcheck.sh /srv/conduit/healthcheck.sh
HEALTHCHECK --start-period=5s --interval=5s CMD ./healthcheck.sh
# Copy over the actual Conduit binary from the builder stage
COPY --from=builder /usr/src/conduit/target/release/conduit /srv/conduit/conduit
# Improve security: Don't run stuff as root, that does not need to run as root:
# Add www-data user and group with UID 82, as used by alpine
# https://git.alpinelinux.org/aports/tree/main/nginx/nginx.pre-install
RUN set -x ; \
addgroup -Sg 82 www-data 2>/dev/null ; \
adduser -S -D -H -h /srv/conduit -G www-data -g www-data www-data 2>/dev/null ; \
addgroup www-data www-data 2>/dev/null && exit 0 ; exit 1
# Change ownership of Conduit files to www-data user and group
RUN chown -cR www-data:www-data /srv/conduit
RUN chmod +x /srv/conduit/healthcheck.sh
# Change user to www-data
USER www-data
# Set container home directory
WORKDIR /srv/conduit
# Run Conduit and print backtraces on panics
ENV RUST_BACKTRACE=1
ENTRYPOINT [ "/srv/conduit/conduit" ]


@ -1,5 +1,19 @@
# Conduit
<!-- ANCHOR: catchphrase -->
### A Matrix homeserver written in Rust
<!-- ANCHOR_END: catchphrase -->
Please visit the [Conduit documentation](https://famedly.gitlab.io/conduit) for more information.
Alternatively you can open [docs/introduction.md](docs/introduction.md) in this repository.
<!-- ANCHOR: body -->
#### What is Matrix?
[Matrix](https://matrix.org) is an open network for secure and decentralized
communication. Users from every Matrix homeserver can chat with users from all
other Matrix servers. You can even use bridges (also called Matrix appservices)
to communicate with users outside of Matrix, like a community on Discord.
#### What is the goal?
@ -7,67 +21,64 @@ An efficient Matrix homeserver that's easy to set up and just works. You can ins
it on a mini-computer like the Raspberry Pi to host Matrix for your family,
friends or company.
#### Can I try it out?
Yes! You can test our Conduit instance by opening a Matrix client (<https://app.element.io> or Element Android for
example) and registering on the `conduit.rs` homeserver.
It is hosted on a ODROID HC 2 with 2GB RAM and a SAMSUNG Exynos 5422 CPU, which
was used in the Samsung Galaxy S5. It joined many big rooms including Matrix
HQ.
Yes! You can test our Conduit instance by opening a client that supports registration tokens such as [Element web](https://app.element.io/), [Nheko](https://matrix.org/ecosystem/clients/nheko/) or [SchildiChat web](https://app.schildi.chat/) and registering on the `conduit.rs` homeserver. The registration token is "for_testing_only". Don't share personal information. Once you have registered, you can use any other [Matrix client](https://matrix.org/ecosystem/clients) to login.
Server hosting for conduit.rs is donated by the Matrix.org Foundation.
#### What is the current status?
As of 2021-09-01, Conduit is Beta, meaning you can join and participate in most
Conduit is Beta, meaning you can join and participate in most
Matrix rooms, but not all features are supported and you might run into bugs
from time to time.
There are still a few important features missing:
- E2EE verification over federation
- Outgoing read receipts, typing, presence over federation
Check out the [Conduit 1.0 Release Milestone](https://gitlab.com/famedly/conduit/-/milestones/3).
#### How can I deploy my own?
Simple install (this was tested the most): [DEPLOY.md](DEPLOY.md)\
Debian package: [debian/README.Debian](debian/README.Debian)\
Docker: [docker/README.md](docker/README.md)
If you want to connect an Appservice to Conduit, take a look at [APPSERVICES.md](APPSERVICES.md).
- E2EE emoji comparison over federation (E2EE chat works)
- Outgoing read receipts, typing, presence over federation (incoming works)
<!-- ANCHOR_END: body -->
<!-- ANCHOR: footer -->
#### How can I contribute?
1. Look for an issue you would like to work on and make sure it's not assigned
to other users
2. Ask someone to assign the issue to you (comment on the issue or chat in
#conduit:nordgedanken.dev)
3. Fork the repo and work on the issue. #conduit:nordgedanken.dev is happy to help :)
1. Look for an issue you would like to work on and make sure no one else is currently working on it.
2. Tell us that you are working on the issue (comment on the issue or chat in
[#conduit:fachschaften.org](https://matrix.to/#/#conduit:fachschaften.org)). If it is more complicated, please explain your approach and ask questions.
3. Fork the repo, create a new branch and push commits.
4. Submit a MR
#### Contact
If you have any questions, feel free to
- Ask in `#conduit:fachschaften.org` on Matrix
- Write an E-Mail to `conduit@koesters.xyz`
- Send a direct message to `@timokoesters:fachschaften.org` on Matrix
- [Open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new)
#### Security
If you believe you have found a security issue, please send a message to [Timo](https://matrix.to/#/@timo:conduit.rs)
and/or [Matthias](https://matrix.to/#/@matthias:ahouansou.cz) on Matrix, or send an email to
[conduit@koesters.xyz](mailto:conduit@koesters.xyz). Please do not disclose details about the issue to anyone else before
a fix is released publicly.
#### Thanks to
Thanks to Famedly, Prototype Fund (DLR and German BMBF) and all other individuals for financially supporting this project.
Thanks to FUTO, Famedly, Prototype Fund (DLR and German BMBF) and all individuals for financially supporting this project.
Thanks to the contributors to Conduit and all libraries we use, for example:
- Ruma: A clean library for the Matrix Spec in Rust
- Rocket: A flexible web framework
- axum: A modular web framework
#### Donate
Liberapay: <https://liberapay.com/timokoesters/>\
Bitcoin: `bc1qnnykf986tw49ur7wx9rpw2tevpsztvar5x8w4n`
- Liberapay: <https://liberapay.com/timokoesters/>
- Bitcoin: `bc1qnnykf986tw49ur7wx9rpw2tevpsztvar5x8w4n`
#### Logo
Lightning Bolt Logo: https://github.com/mozilla/fxemoji/blob/gh-pages/svgs/nature/u26A1-bolt.svg \
Logo License: https://github.com/mozilla/fxemoji/blob/gh-pages/LICENSE.md
- Lightning Bolt Logo: <https://github.com/mozilla/fxemoji/blob/gh-pages/svgs/nature/u26A1-bolt.svg>
- Logo License: <https://github.com/mozilla/fxemoji/blob/gh-pages/LICENSE.md>
<!-- ANCHOR_END: footer -->

bin/complement Executable file

@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -euo pipefail
# Path to Complement's source code
COMPLEMENT_SRC="$1"
# A `.jsonl` file to write test logs to
LOG_FILE="$2"
# A `.jsonl` file to write test results to
RESULTS_FILE="$3"
OCI_IMAGE="complement-conduit:dev"
env \
-C "$(git rev-parse --show-toplevel)" \
docker build \
--tag "$OCI_IMAGE" \
--file complement/Dockerfile \
.
# It's okay (likely, even) that `go test` exits nonzero
set +o pipefail
env \
-C "$COMPLEMENT_SRC" \
COMPLEMENT_BASE_IMAGE="$OCI_IMAGE" \
go test -json ./tests | tee "$LOG_FILE"
set -o pipefail
# Post-process the results into an easy-to-compare format
cat "$LOG_FILE" | jq -c '
select(
(.Action == "pass" or .Action == "fail" or .Action == "skip")
and .Test != null
) | {Action: .Action, Test: .Test}
' | sort > "$RESULTS_FILE"

bin/nix-build-and-cache Executable file

@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -euo pipefail
# Build the installable and forward any other arguments too. Also, use
# nix-output-monitor instead if it's available.
if command -v nom &> /dev/null; then
nom build "$@"
else
nix build "$@"
fi
if [ ! -z ${ATTIC_TOKEN+x} ]; then
nix run --inputs-from . attic -- \
login \
conduit \
"${ATTIC_ENDPOINT:-https://attic.conduit.rs/conduit}" \
"$ATTIC_TOKEN"
readarray -t derivations < <(nix path-info "$@" --derivation)
for derivation in "${derivations[@]}"; do
cache+=(
"$(nix-store --query --requisites --include-outputs "$derivation")"
)
done
# Upload them to Attic
#
# Use `xargs` and a here-string because something would probably explode if
# several thousand arguments got passed to a command at once. Hopefully no
# store paths include a newline in them.
(
IFS=$'\n'
nix shell --inputs-from . attic -c xargs \
attic push conduit <<< "${cache[*]}"
)
else
echo "\$ATTIC_TOKEN is unset, skipping uploading to the binary cache"
fi

book.toml Normal file

@ -0,0 +1,21 @@
[book]
description = "Conduit is a simple, fast and reliable chat server for the Matrix protocol"
language = "en"
multilingual = false
src = "docs"
title = "Conduit"
[build]
build-dir = "public"
create-missing = true
[output.html]
edit-url-template = "https://gitlab.com/famedly/conduit/-/edit/next/{path}"
git-repository-icon = "fa-git-square"
git-repository-url = "https://gitlab.com/famedly/conduit"
[output.html.search]
limit-results = 15
[output.html.code.hidelines]
json = "~"

complement/Dockerfile Normal file

@ -0,0 +1,45 @@
FROM rust:1.79.0
WORKDIR /workdir
RUN apt-get update && apt-get install -y --no-install-recommends \
libclang-dev
COPY Cargo.toml Cargo.toml
COPY Cargo.lock Cargo.lock
COPY src src
RUN cargo build --release \
&& mv target/release/conduit conduit \
&& rm -rf target
# Install caddy
RUN apt-get update \
&& apt-get install -y \
debian-keyring \
debian-archive-keyring \
apt-transport-https \
curl \
&& curl -1sLf 'https://dl.cloudsmith.io/public/caddy/testing/gpg.key' \
| gpg --dearmor -o /usr/share/keyrings/caddy-testing-archive-keyring.gpg \
&& curl -1sLf 'https://dl.cloudsmith.io/public/caddy/testing/debian.deb.txt' \
| tee /etc/apt/sources.list.d/caddy-testing.list \
&& apt-get update \
&& apt-get install -y caddy
COPY conduit-example.toml conduit.toml
COPY complement/caddy.json caddy.json
ENV SERVER_NAME=localhost
ENV CONDUIT_CONFIG=/workdir/conduit.toml
RUN sed -i "s/port = 6167/port = 8008/g" conduit.toml
RUN echo "log = \"warn,_=off,sled=off\"" >> conduit.toml
RUN sed -i "s/address = \"127.0.0.1\"/address = \"0.0.0.0\"/g" conduit.toml
EXPOSE 8008 8448
CMD uname -a && \
sed -i "s/#server_name = \"your.server.name\"/server_name = \"${SERVER_NAME}\"/g" conduit.toml && \
sed -i "s/your.server.name/${SERVER_NAME}/g" caddy.json && \
caddy start --config caddy.json > /dev/null && \
/workdir/conduit

complement/README.md Normal file

@ -0,0 +1,11 @@
# Complement
## What's that?
Have a look at [its repository](https://github.com/matrix-org/complement).
## How do I use it with Conduit?
The script at [`../bin/complement`](../bin/complement) has automation for this.
It takes a few command-line arguments; you can read the script to find out what
they are.
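For example, assuming Complement is checked out in a sibling directory and the script is run from the root of this repository, an invocation could look like this (the log and results paths are arbitrary):
```bash
./bin/complement ../complement complement-logs.jsonl complement-results.jsonl
```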

complement/caddy.json Normal file

@ -0,0 +1,72 @@
{
"logging": {
"logs": {
"default": {
"level": "WARN"
}
}
},
"apps": {
"http": {
"https_port": 8448,
"servers": {
"srv0": {
"listen": [":8448"],
"routes": [{
"match": [{
"host": ["your.server.name"]
}],
"handle": [{
"handler": "subroute",
"routes": [{
"handle": [{
"handler": "reverse_proxy",
"upstreams": [{
"dial": "127.0.0.1:8008"
}]
}]
}]
}],
"terminal": true
}],
"tls_connection_policies": [{
"match": {
"sni": ["your.server.name"]
}
}]
}
}
},
"pki": {
"certificate_authorities": {
"local": {
"name": "Complement CA",
"root": {
"certificate": "/complement/ca/ca.crt",
"private_key": "/complement/ca/ca.key"
},
"intermediate": {
"certificate": "/complement/ca/ca.crt",
"private_key": "/complement/ca/ca.key"
}
}
}
},
"tls": {
"automation": {
"policies": [{
"subjects": ["your.server.name"],
"issuers": [{
"module": "internal"
}],
"on_demand": true
}, {
"issuers": [{
"module": "internal",
"ca": "local"
}]
}]
}
}
}
}


@ -1,22 +1,35 @@
# =============================================================================
# This is the official example config for Conduit.
# If you use it for your server, you will need to adjust it to your own needs.
# At the very least, change the server_name field!
# =============================================================================
[global]
# The server_name is the name of this server. It is used as a suffix for user
# The server_name is the pretty name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs to be reachable at https://your.server.name/ on port
# 443 (client-server) and 8448 (federation) OR you can create /.well-known
# files to redirect requests. See
# The Conduit server needs all /_matrix/ requests to be reachable at
# https://your.server.name/ on port 443 (client-server) and 8448 (federation).
# If that's not possible for you, you can create /.well-known files to redirect
# requests. See
# https://matrix.org/docs/spec/client_server/latest#get-well-known-matrix-client
# and https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information
# and
# https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information, or continue below to see how conduit can do this for you.
# YOU NEED TO EDIT THIS
#server_name = "your.server.name"
database_backend = "rocksdb"
# This is the only directory where Conduit will save its data
database_path = "/var/lib/conduit/"
database_path = "/var/lib/matrix-conduit/"
# The port Conduit will be running on. You need to set up a reverse proxy in
# your web server (e.g. apache or nginx), so all requests to /_matrix on port
# 443 and 8448 will be forwarded to the Conduit instance running on this port
# Docker users: Don't change this, you'll need to map an external port to this.
port = 6167
# Max size for uploads
@ -25,24 +38,37 @@ max_request_size = 20_000_000 # in bytes
# Enables registration. If set to false, no users can register on this server.
allow_registration = true
# Disable encryption, so no new encrypted rooms can be created
# Note: existing rooms will continue to work
#allow_encryption = false
#allow_federation = false
# A static registration token that new users will have to provide when creating
# an account. YOU NEED TO EDIT THIS.
# - Insert a password that users will have to enter on registration
# - Start the line with '#' to remove the condition
registration_token = ""
# Enable jaeger to support monitoring and troubleshooting through jaeger
#allow_jaeger = false
allow_check_for_updates = true
allow_federation = true
# Enable the display name lightning bolt on registration.
enable_lightning_bolt = true
# Servers listed here will be used to gather public keys of other servers.
# Generally, copying this exactly should be enough. (Currently, Conduit doesn't
# support batched key requests, so this list should only contain Synapse
# servers.)
trusted_servers = ["matrix.org"]
#max_concurrent_requests = 100 # How many requests Conduit sends to other servers at the same time
#log = "info,state_res=warn,rocket=off,_=off,sled=off"
#workers = 4 # default: cpu core count * 2
# Controls the log verbosity. See also [here][0].
#
# [0]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives
#log = "..."
address = "127.0.0.1" # This makes sure Conduit can only be reached using the reverse proxy
#address = "0.0.0.0" # If Conduit is running in a container, make sure the reverse proxy (ie. Traefik) can reach it.
proxy = "none" # more examples can be found at src/database/proxy.rs:6
# The total amount of memory that the database will use.
#db_cache_capacity_mb = 200
[global.well_known]
# Conduit handles the /.well-known/matrix/* endpoints, making both clients and servers try to access conduit with the host
# server_name and port 443 by default.
# If you want to override these defaults, uncomment and edit the following lines accordingly:
#server = your.server.name:443
#client = https://your.server.name


@ -1,28 +1,36 @@
Conduit for Debian
==================
Installation
------------
For information about downloading, building and deploying the Debian package, see
the "Installing Conduit" section in the Deploying docs.
All following sections up to "Setting up the Reverse Proxy" can be ignored because
this is handled automatically by the packaging.
Configuration
-------------
When installed, Debconf generates the configuration of the homeserver
(host)name, the address and port it listens on. This configuration ends up in
/etc/matrix-conduit/conduit.toml.
`/etc/matrix-conduit/conduit.toml`.
You can tweak more detailed settings by uncommenting and setting the variables
in /etc/matrix-conduit/conduit.toml. This involves settings such as the maximum
in `/etc/matrix-conduit/conduit.toml`. This involves settings such as the maximum
file size for download/upload, enabling federation, etc.
Running
-------
The package uses the matrix-conduit.service systemd unit file to start and
The package uses the `matrix-conduit.service` systemd unit file to start and
stop Conduit. It loads the configuration file mentioned above to set up the
environment before running the server.
This package assumes by default that Conduit will be placed behind a reverse
proxy such as Apache or nginx. This default deployment entails just listening
on 127.0.0.1 and the free port 6167 and is reachable via a client using the URL
http://localhost:6167.
on `127.0.0.1` and the free port `6167` and is reachable via a client using the URL
<http://localhost:6167>.
At a later stage this packaging may support also setting up TLS and running
stand-alone. In this case, however, you need to set up some certificates and


@ -3,6 +3,7 @@ Description=Conduit Matrix homeserver
After=network.target
[Service]
DynamicUser=yes
User=_matrix-conduit
Group=_matrix-conduit
Type=simple

debian/postinst vendored

@ -5,7 +5,7 @@ set -e
CONDUIT_CONFIG_PATH=/etc/matrix-conduit
CONDUIT_CONFIG_FILE="${CONDUIT_CONFIG_PATH}/conduit.toml"
CONDUIT_DATABASE_PATH=/var/lib/matrix-conduit/conduit_db
CONDUIT_DATABASE_PATH=/var/lib/matrix-conduit/
case "$1" in
configure)
@ -19,11 +19,11 @@ case "$1" in
_matrix-conduit
fi
# Create the database path if it does not exist yet.
if [ ! -d "$CONDUIT_DATABASE_PATH" ]; then
mkdir -p "$CONDUIT_DATABASE_PATH"
chown _matrix-conduit "$CONDUIT_DATABASE_PATH"
fi
# Create the database path if it does not exist yet and fix up ownership
# and permissions.
mkdir -p "$CONDUIT_DATABASE_PATH"
chown _matrix-conduit "$CONDUIT_DATABASE_PATH"
chmod 700 "$CONDUIT_DATABASE_PATH"
if [ ! -e "$CONDUIT_CONFIG_FILE" ]; then
# Write the debconf values in the config.
@ -36,18 +36,24 @@ case "$1" in
mkdir -p "$CONDUIT_CONFIG_PATH"
cat > "$CONDUIT_CONFIG_FILE" << EOF
[global]
# The server_name is the name of this server. It is used as a suffix for user
# and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs to be reachable at https://your.server.name/ on port
# 443 (client-server) and 8448 (federation) OR you can create /.well-known
# files to redirect requests. See
# The server_name is the pretty name of this server. It is used as a suffix for
# user and room ids. Examples: matrix.org, conduit.rs
# The Conduit server needs all /_matrix/ requests to be reachable at
# https://your.server.name/ on port 443 (client-server) and 8448 (federation).
# If that's not possible for you, you can create /.well-known files to redirect
# requests. See
# https://matrix.org/docs/spec/client_server/latest#get-well-known-matrix-client
# and https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information.
# and
# https://matrix.org/docs/spec/server_server/r0.1.4#get-well-known-matrix-server
# for more information
server_name = "${CONDUIT_SERVER_NAME}"
# This is the only directory where Conduit will save its data.
database_path = "${CONDUIT_DATABASE_PATH}"
database_backend = "rocksdb"
# The address Conduit will be listening on.
# By default the server listens on address 0.0.0.0. Change this to 127.0.0.1 to
@ -56,7 +62,8 @@ address = "${CONDUIT_ADDRESS}"
# The port Conduit will be running on. You need to set up a reverse proxy in
# your web server (e.g. apache or nginx), so all requests to /_matrix on port
# 443 and 8448 will be forwarded to the Conduit instance running on this port.
# 443 and 8448 will be forwarded to the Conduit instance running on this port
# Docker users: Don't change this, you'll need to map an external port to this.
port = ${CONDUIT_PORT}
# Max size for uploads
@ -65,20 +72,30 @@ max_request_size = 20_000_000 # in bytes
# Enables registration. If set to false, no users can register on this server.
allow_registration = true
# Disable encryption, so no new encrypted rooms can be created.
# Note: Existing rooms will continue to work.
#allow_encryption = false
#allow_federation = false
# A static registration token that new users will have to provide when creating
# an account.
# - Insert a password that users will have to enter on registration
# - Start the line with '#' to remove the condition
#registration_token = ""
# Enable jaeger to support monitoring and troubleshooting through jaeger.
#allow_jaeger = false
allow_federation = true
allow_check_for_updates = true
# Enable the display name lightning bolt on registration.
enable_lightning_bolt = true
# Servers listed here will be used to gather public keys of other servers.
# Generally, copying this exactly should be enough. (Currently, Conduit doesn't
# support batched key requests, so this list should only contain Synapse
# servers.)
trusted_servers = ["matrix.org"]
#max_concurrent_requests = 100 # How many requests Conduit sends to other servers at the same time
#log = "info,state_res=warn,rocket=off,_=off,sled=off"
#workers = 4 # default: cpu core count * 2
# The total amount of memory that the database will use.
#db_cache_capacity_mb = 200
# Controls the log verbosity. See also [here][0].
#
# [0]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives
#log = "..."
EOF
fi
;;

default.nix Normal file

@ -0,0 +1,10 @@
(import
(
let lock = builtins.fromJSON (builtins.readFile ./flake.lock); in
fetchTarball {
url = lock.nodes.flake-compat.locked.url or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{ src = ./.; }
).defaultNix


@ -1,121 +0,0 @@
# Deploy using Docker
> **Note:** To run and use Conduit you should probably use it with a Domain or Subdomain behind a reverse proxy (like Nginx, Traefik, Apache, ...) with a Lets Encrypt certificate.
## Docker
### Build & Dockerfile
The Dockerfile provided by Conduit has two stages, each of which creates an image.
1. **Builder:** Builds the binary from local context or by cloning a git revision from the official repository.
2. **Runner:** Copies the built binary from **Builder** and sets up the runtime environment, like creating a volume to persist the database and applying the correct permissions.
To build the image you can use the following command
```bash
docker build --tag matrixconduit/matrix-conduit:latest .
```
which also will tag the resulting image as `matrixconduit/matrix-conduit:latest`.
### Run
After building the image you can simply run it with
```bash
docker run -d -p 8448:6167 -v ~/conduit.toml:/srv/conduit/conduit.toml -v db:/srv/conduit/.local/share/conduit matrixconduit/matrix-conduit:latest
```
or you can skip the build step and pull the image from one of the following registries:
| Registry | Image | Size |
| --------------- | --------------------------------------------------------------- | --------------------- |
| Docker Hub | [matrixconduit/matrix-conduit:latest][dh] | ![Image Size][shield] |
| GitLab Registry | [registry.gitlab.com/famedly/conduit/matrix-conduit:latest][gl] | ![Image Size][shield] |
[dh]: https://hub.docker.com/r/matrixconduit/matrix-conduit
[gl]: https://gitlab.com/famedly/conduit/container_registry/
[shield]: https://img.shields.io/docker/image-size/matrixconduit/matrix-conduit/latest
The `-d` flag lets the container run in detached mode. You now need to supply a `conduit.toml` config file, an example can be found [here](../conduit-example.toml).
You can pass in different env vars to change config values on the fly. You can even configure Conduit completely by using env vars, but for that you need
to pass `-e CONDUIT_CONFIG=""` into your container. For an overview of possible values, please take a look at the `docker-compose.yml` file.
If you just want to test Conduit for a short time, you can use the `--rm` flag, which will clean up everything related to your container after you stop it.
## Docker-compose
If the docker command is not for you or your setup, you can also use one of the provided `docker-compose` files. Depending on your proxy setup, use the [`docker-compose.traefik.yml`](docker-compose.traefik.yml) and [`docker-compose.override.traefik.yml`](docker-compose.override.traefik.yml) for Traefik (don't forget to remove `.traefik` from the filenames) or the normal [`docker-compose.yml`](../docker-compose.yml) for every other reverse proxy. Additional info about deploying
Conduit can be found [here](../DEPLOY.md).
### Build
To build the Conduit image with docker-compose, you first need to open and modify the `docker-compose.yml` file. There you need to comment the `image:` option and uncomment the `build:` option. Then call docker-compose with:
```bash
docker-compose up
```
This will also start the container right afterwards, so if you want it to run in detached mode, you should also use the `-d` flag.
### Run
If you already have built the image or want to use one from the registries, you can just start the container and everything else in the compose file in detached mode with:
```bash
docker-compose up -d
```
> **Note:** Don't forget to modify and adjust the compose file to your needs.
### Use Traefik as Proxy
As a container user, you probably know about Traefik. It is an easy-to-use reverse proxy for making containerized apps and services available through the web. With the
two provided files, [`docker-compose.traefik.yml`](docker-compose.traefik.yml) and [`docker-compose.override.traefik.yml`](docker-compose.override.traefik.yml), it is
equally easy to deploy and use Conduit, with a little caveat. If you already took a look at the files, then you should have seen the `well-known` service, and that is
the little caveat. Traefik is simply a proxy and loadbalancer and is not able to serve any kind of content, but for Conduit to federate, we need to either expose ports
`443` and `8448` or serve two endpoints `.well-known/matrix/client` and `.well-known/matrix/server`.
With the service `well-known` we use a single `nginx` container that will serve those two files.
So...step by step:
1. Copy [`docker-compose.traefik.yml`](docker-compose.traefik.yml) and [`docker-compose.override.traefik.yml`](docker-compose.override.traefik.yml) from the repository and remove `.traefik` from the filenames.
2. Open both files and modify/adjust them to your needs. Meaning, change the `CONDUIT_SERVER_NAME` and the volume host mappings according to your needs.
3. Create the `conduit.toml` config file, an example can be found [here](../conduit-example.toml), or set `CONDUIT_CONFIG=""` and configure Conduit per env vars.
4. Uncomment the `element-web` service if you want to host your own Element Web Client and create a `element_config.json`.
5. Create the files needed by the `well-known` service.
- `./nginx/matrix.conf` (relative to the compose file, you can change this, but then also need to change the volume mapping)
```nginx
server {
server_name <SUBDOMAIN>.<DOMAIN>;
listen 80 default_server;
location /.well-known/matrix/ {
root /var/www;
default_type application/json;
add_header Access-Control-Allow-Origin *;
}
}
```
- `./nginx/www/.well-known/matrix/client` (relative to the compose file, you can change this, but then also need to change the volume mapping)
```json
{
"m.homeserver": {
"base_url": "https://<SUBDOMAIN>.<DOMAIN>"
}
}
```
- `./nginx/www/.well-known/matrix/server` (relative to the compose file, you can change this, but then also need to change the volume mapping)
```json
{
"m.server": "<SUBDOMAIN>.<DOMAIN>:443"
}
```
6. Run `docker-compose up -d`
7. Connect to your homeserver with your preferred client and create a user. You should do this immediately after starting Conduit, because the first created user is the admin.


@ -1,29 +1,34 @@
# syntax=docker/dockerfile:1
# ---------------------------------------------------------------------------------------------------------
# This Dockerfile is intended to be built as part of Conduit's CI pipeline.
# It does not build Conduit in Docker, but just copies the matching build artifact from the build job.
# As a consequence, this is not a multiarch capable image. It always expects and packages a x86_64 binary.
# It does not build Conduit in Docker, but just copies the matching build artifact from the build jobs.
#
# It is mostly based on the normal Conduit Dockerfile, but adjusted in a few places to maximise caching.
# Credit's for the original Dockerfile: Weasy666.
# ---------------------------------------------------------------------------------------------------------
FROM docker.io/alpine:3.14 AS runner
FROM docker.io/alpine:3.16.0@sha256:4ff3ca91275773af45cb4b0834e12b7eb47d1c18f770a0b151381cd227f4c253 AS runner
# Standard port on which Conduit launches.
# You still need to map the port when using the docker command or docker-compose.
EXPOSE 6167
# Note from @jfowl: I would like to remove this in the future and just have the Docker version be configured with envs.
ENV CONDUIT_CONFIG="/srv/conduit/conduit.toml"
# Users are expected to mount a volume to this directory:
ARG DEFAULT_DB_PATH=/var/lib/matrix-conduit
ENV CONDUIT_PORT=6167 \
CONDUIT_ADDRESS="0.0.0.0" \
CONDUIT_DATABASE_PATH=${DEFAULT_DB_PATH} \
CONDUIT_CONFIG=''
# └─> Set no config file to do all configuration with env vars
# Conduit needs:
# ca-certificates: for https
# libgcc: Apparently this is needed, even if I (@jfowl) don't know exactly why. But whatever, it's not that big.
# iproute2: for `ss` for the healthcheck script
RUN apk add --no-cache \
ca-certificates \
libgcc
ca-certificates \
iproute2
ARG CREATED
ARG VERSION
@ -31,49 +36,49 @@ ARG GIT_REF
# Labels according to https://github.com/opencontainers/image-spec/blob/master/annotations.md
# including a custom label specifying the build command
LABEL org.opencontainers.image.created=${CREATED} \
org.opencontainers.image.authors="Conduit Contributors" \
org.opencontainers.image.title="Conduit" \
org.opencontainers.image.version=${VERSION} \
org.opencontainers.image.vendor="Conduit Contributors" \
org.opencontainers.image.description="A Matrix homeserver written in Rust" \
org.opencontainers.image.url="https://conduit.rs/" \
org.opencontainers.image.revision=${GIT_REF} \
org.opencontainers.image.source="https://gitlab.com/famedly/conduit.git" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.documentation="https://gitlab.com/famedly/conduit" \
org.opencontainers.image.ref.name=""
org.opencontainers.image.authors="Conduit Contributors" \
org.opencontainers.image.title="Conduit" \
org.opencontainers.image.version=${VERSION} \
org.opencontainers.image.vendor="Conduit Contributors" \
org.opencontainers.image.description="A Matrix homeserver written in Rust" \
org.opencontainers.image.url="https://conduit.rs/" \
org.opencontainers.image.revision=${GIT_REF} \
org.opencontainers.image.source="https://gitlab.com/famedly/conduit.git" \
org.opencontainers.image.licenses="Apache-2.0" \
org.opencontainers.image.documentation="https://gitlab.com/famedly/conduit" \
org.opencontainers.image.ref.name=""
# Created directory for the database and media files
RUN mkdir -p /srv/conduit/.local/share/conduit
# Test if Conduit is still alive, uses the same endpoint as Element
COPY ./docker/healthcheck.sh /srv/conduit/healthcheck.sh
HEALTHCHECK --start-period=5s --interval=5s CMD ./healthcheck.sh
# Depending on the target platform (e.g. "linux/arm/v7", "linux/arm64/v8", or "linux/amd64")
# copy the matching binary into this docker image
ARG TARGETPLATFORM
COPY ./$TARGETPLATFORM /srv/conduit/conduit
# Improve security: Don't run stuff as root, that does not need to run as root:
# Add www-data user and group with UID 82, as used by alpine
# https://git.alpinelinux.org/aports/tree/main/nginx/nginx.pre-install
# Most distros also use 1000:1000 for the first real user, so this should resolve volume mounting problems.
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN set -x ; \
addgroup -Sg 82 www-data 2>/dev/null ; \
adduser -S -D -H -h /srv/conduit -G www-data -g www-data www-data 2>/dev/null ; \
addgroup www-data www-data 2>/dev/null && exit 0 ; exit 1
deluser --remove-home www-data ; \
addgroup -S -g ${GROUP_ID} conduit 2>/dev/null ; \
adduser -S -u ${USER_ID} -D -H -h /srv/conduit -G conduit -g conduit conduit 2>/dev/null ; \
addgroup conduit conduit 2>/dev/null && exit 0 ; exit 1
# Change ownership of Conduit files to www-data user and group
RUN chown -cR www-data:www-data /srv/conduit
RUN chmod +x /srv/conduit/healthcheck.sh
# Change ownership of Conduit files to conduit user and group
RUN chown -cR conduit:conduit /srv/conduit && \
chmod +x /srv/conduit/healthcheck.sh && \
mkdir -p ${DEFAULT_DB_PATH} && \
chown -cR conduit:conduit ${DEFAULT_DB_PATH}
# Change user to www-data
USER www-data
# Change user to conduit
USER conduit
# Set container home directory
WORKDIR /srv/conduit
# Run Conduit and print backtraces on panics
ENV RUST_BACKTRACE=1
ENTRYPOINT [ "/srv/conduit/conduit" ]
# Depending on the target platform (e.g. "linux/arm/v7", "linux/arm64/v8", or "linux/amd64")
# copy the matching binary into this docker image
ARG TARGETPLATFORM
COPY --chown=conduit:conduit ./$TARGETPLATFORM /srv/conduit/conduit


@ -1,13 +1,19 @@
#!/bin/sh
# If the port is not specified as env var, take it from the config file
if [ -z ${CONDUIT_PORT} ]; then
CONDUIT_PORT=$(grep -m1 -o 'port\s=\s[0-9]*' conduit.toml | grep -m1 -o '[0-9]*')
# If the config file does not contain a default port and the CONDUIT_PORT env is not set, create
# try to get port from process list
if [ -z "${CONDUIT_PORT}" ]; then
CONDUIT_PORT=$(ss -tlpn | grep conduit | grep -m1 -o ':[0-9]*' | grep -m1 -o '[0-9]*')
fi
# If CONDUIT_ADDRESS is not set try to get the address from the process list
if [ -z "${CONDUIT_ADDRESS}" ]; then
CONDUIT_ADDRESS=$(ss -tlpn | awk -F ' +|:' '/conduit/ { print $4 }')
fi
# The actual health check.
# We try to first get a response on HTTP and when that fails on HTTPS and when that fails, we exit with code 1.
# TODO: Change this to a single wget call. Do we have a config value that we can check for that?
wget --no-verbose --tries=1 --spider "http://localhost:${CONDUIT_PORT}/_matrix/client/versions" || \
wget --no-verbose --tries=1 --spider "https://localhost:${CONDUIT_PORT}/_matrix/client/versions" || \
wget --no-verbose --tries=1 --spider "http://${CONDUIT_ADDRESS}:${CONDUIT_PORT}/_matrix/client/versions" || \
wget --no-verbose --tries=1 --spider "https://${CONDUIT_ADDRESS}:${CONDUIT_PORT}/_matrix/client/versions" || \
exit 1

docs/SUMMARY.md Normal file

@ -0,0 +1,14 @@
# Summary
- [Introduction](introduction.md)
- [Configuration](configuration.md)
- [Delegation](delegation.md)
- [Deploying](deploying.md)
- [Generic](deploying/generic.md)
- [Debian](deploying/debian.md)
- [Docker](deploying/docker.md)
- [NixOS](deploying/nixos.md)
- [TURN](turn.md)
- [Appservices](appservices.md)
- [FAQ](faq.md)

docs/appservices.md Normal file

@ -0,0 +1,61 @@
# Setting up Appservices
## Getting help
If you run into any problems while setting up an Appservice, write an email to `timo@koesters.xyz`, ask us in [#conduit:fachschaften.org](https://matrix.to/#/#conduit:fachschaften.org) or [open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new).
## Set up the appservice - general instructions
Follow whatever instructions are given by the appservice. This usually includes
downloading, changing its config (setting domain, homeserver url, port etc.)
and later starting it.
At some point the appservice guide should ask you to add a registration yaml
file to the homeserver. In Synapse you would do this by adding the path to the
homeserver.yaml, but in Conduit you can do this from within Matrix:
First, go into the #admins room of your homeserver. The first person that
registered on the homeserver automatically joins it. Then send a message into
the room like this:
@conduit:your.server.name: register-appservice
```
paste
the
contents
of
the
yaml
registration
here
```
You can confirm it worked by sending a message like this:
`@conduit:your.server.name: list-appservices`
The @conduit bot should answer with `Appservices (1): your-bridge`
Then you are done. Conduit will send messages to the appservices and the
appservice can send requests to the homeserver. You don't need to restart
Conduit, but if it doesn't work, restarting while the appservice is running
could help.
## Appservice-specific instructions
### Remove an appservice
To remove an appservice go to your admin room and execute
`@conduit:your.server.name: unregister-appservice <name>`
where `<name>` is one of the names shown in the output of `list-appservices`.
### Tested appservices
These appservices have been tested and work with Conduit without any extra steps:
- [matrix-appservice-discord](https://github.com/Half-Shot/matrix-appservice-discord)
- [mautrix-hangouts](https://github.com/mautrix/hangouts/)
- [mautrix-telegram](https://github.com/mautrix/telegram/)
- [mautrix-signal](https://github.com/mautrix/signal/) from version `0.2.2` forward.
- [heisenbridge](https://github.com/hifi/heisenbridge/)

docs/configuration.md Normal file

@ -0,0 +1,114 @@
# Configuration
**Conduit** is configured using a TOML file. The configuration file is loaded from the path specified by the `CONDUIT_CONFIG` environment variable.
> **Note:** The configuration file is required to run Conduit. If the `CONDUIT_CONFIG` environment variable is not set, Conduit will exit with an error.
> **Note:** If you update the configuration file, you must restart Conduit for the changes to take effect
> **Note:** You can also configure Conduit by using `CONDUIT_{field_name}` environment variables. To set values inside a table, use `CONDUIT_{table_name}__{field_name}`. Example: `CONDUIT_SERVER_NAME="example.org"`
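For instance, a minimal sketch of setting a value inside the `tls` table purely through the environment:
```bash
export CONDUIT_SERVER_NAME="example.org"
# The double underscore separates the table name from the field name,
# i.e. this sets `certs` inside [global.tls]:
export CONDUIT_TLS__CERTS="/path/to/cert.pem"
```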
Conduit's configuration file is divided into the following sections:
- [Global](#global)
- [TLS](#tls)
- [Proxy](#proxy)
## Global
The `global` section contains the following fields:
> **Note:** The `*` symbol indicates that the field is required, and the values in **parentheses** are the possible values
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `address` | `string` | The address to bind to | `"127.0.0.1"` |
| `port` | `integer` | The port to bind to | `8000` |
| `tls` | `table` | See the [TLS configuration](#tls) | N/A |
| `server_name`_*_ | `string` | The server name | N/A |
| `database_backend`_*_ | `string` | The database backend to use (`"rocksdb"` *recommended*, `"sqlite"`) | N/A |
| `database_path`_*_ | `string` | The path to the database file/dir | N/A |
| `db_cache_capacity_mb` | `float` | The cache capacity, in MB | `300.0` |
| `enable_lightning_bolt` | `boolean` | Add `⚡️` emoji to end of user's display name | `true` |
| `allow_check_for_updates` | `boolean` | Allow Conduit to check for updates | `true` |
| `conduit_cache_capacity_modifier` | `float` | The value to multiply the default cache capacity by | `1.0` |
| `rocksdb_max_open_files` | `integer` | The maximum number of open files | `1000` |
| `pdu_cache_capacity` | `integer` | The maximum number of Persisted Data Units (PDUs) to cache | `150000` |
| `cleanup_second_interval` | `integer` | How often conduit should clean up the database, in seconds | `60` |
| `max_request_size` | `integer` | The maximum request size, in bytes | `20971520` (20 MiB) |
| `max_concurrent_requests` | `integer` | The maximum number of concurrent requests | `100` |
| `max_fetch_prev_events` | `integer` | The maximum number of previous events to fetch per request if conduit notices events are missing | `100` |
| `allow_registration` | `boolean` | Opens your homeserver to public registration | `false` |
| `registration_token` | `string` | The token users need to have when registering to your homeserver | N/A |
| `allow_encryption` | `boolean` | Allow users to enable encryption in their rooms | `true` |
| `allow_federation` | `boolean` | Allow federation with other servers | `true` |
| `allow_room_creation` | `boolean` | Allow users to create rooms | `true` |
| `allow_unstable_room_versions` | `boolean` | Allow users to create and join rooms with unstable versions | `true` |
| `default_room_version` | `string` | The default room version (`"6"`-`"10"`)| `"10"` |
| `allow_jaeger` | `boolean` | Allow Jaeger tracing | `false` |
| `tracing_flame` | `boolean` | Enable flame tracing | `false` |
| `proxy` | `table` | See the [Proxy configuration](#proxy) | N/A |
| `jwt_secret` | `string` | The secret used for JWT login; if it is not set, JWT login requests will return a 400 error | N/A |
| `trusted_servers` | `array` | The list of trusted servers to gather public keys of offline servers | `["matrix.org"]` |
| `log` | `string` | The log verbosity to use | `"warn"` |
| `turn_username` | `string` | The TURN username | `""` |
| `turn_password` | `string` | The TURN password | `""` |
| `turn_uris` | `array` | The TURN URIs | `[]` |
| `turn_secret` | `string` | The TURN secret | `""` |
| `turn_ttl` | `integer` | The TURN TTL in seconds | `86400` |
| `emergency_password` | `string` | Set a password to login as the `conduit` user in case of emergency | N/A |
| `well_known_client` | `string` | Used for [delegation](delegation.md) | See [delegation](delegation.md) |
| `well_known_server` | `string` | Used for [delegation](delegation.md) | See [delegation](delegation.md) |
### TLS
The `tls` table contains the following fields:
- `certs`: The path to the public PEM certificate
- `key`: The path to the PEM private key
#### Example
```toml
[global.tls]
certs = "/path/to/cert.pem"
key = "/path/to/key.pem"
```
### Proxy
You can choose what requests conduit should proxy (if any). The `proxy` table contains the following fields
#### Global
The global option will proxy all outgoing requests. The `global` table contains the following fields:
- `url`: The URL of the proxy server
##### Example
```toml
[global.proxy.global]
url = "https://example.com"
```
#### By domain
An array of tables that contain the following fields:
- `url`: The URL of the proxy server
- `include`: Domains that should be proxied (assumed to be `["*"]` if unset)
- `exclude`: Domains that should not be proxied (takes precedence over `include`)
Both `include` and `exclude` allow for glob pattern matching.
##### Example
In this example, all requests to domains ending in `.onion` and to `matrix.secretly-an-onion-domain.xyz`
will be proxied via `socks5://localhost:9050`, except for domains ending in `.clearnet.onion`. You can add as many `by_domain` tables as you need.
```toml
[[global.proxy.by_domain]]
url = "socks5://localhost:9050"
include = ["*.onion", "matrix.secretly-an-onion-domain.xyz"]
exclude = ["*.clearnet.onion"]
```
### Example
> **Note:** The following example is a minimal configuration file. You should replace the values with your own.
```toml
[global]
{{#include ../conduit-example.toml:22:}}
```

docs/delegation.md Normal file

@ -0,0 +1,69 @@
# Delegation
You can run Conduit on a separate domain from the actual server name (what shows up in user ids, aliases, etc.).
For example, your users can have IDs such as `@foo:example.org` and aliases like `#bar:example.org`,
while actually having Conduit hosted on the `matrix.example.org` domain. This is called delegation.
## Automatic (recommended)
Conduit has support for hosting delegation files by itself, and by default uses it to serve federation traffic on port 443.
With this method, you need to direct requests to `/.well-known/matrix/*` to Conduit in your reverse proxy.
This is only recommended if Conduit runs on the same physical server as the one serving your server name (e.g. example.org),
because servers don't always seem to cache the response, which otherwise leads to slower response times. It should also work
if the server running Conduit is reachable through something like a VPN.
> **Note**: this will automatically allow you to use [sliding sync][0] without any extra configuration
To configure it, use the following options:
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| `well_known_client` | `String` | The URL that clients should use to connect to Conduit | `https://<server_name>` |
| `well_known_server` | `String` | The hostname and port servers should use to connect to Conduit | `<server_name>:443` |
### Example
```toml
[global]
well_known_client = "https://matrix.example.org"
well_known_server = "matrix.example.org:443"
```
## Manual
Alternatively you can serve static JSON files to inform clients and servers how to connect to Conduit.
### Servers
For servers to discover how to access your domain, serve a response in the following format for `/.well-known/matrix/server`:
```json
{
"m.server": "matrix.example.org:443"
}
```
Where `matrix.example.org` is the domain and `443` is the port Conduit is accessible at.
### Clients
For clients to discover how to access your domain, serve a response in the following format for `/.well-known/matrix/client`:
```json
{
"m.homeserver": {
"base_url": "https://matrix.example.org"
}
}
```
Where `matrix.example.org` is the URL Conduit is accessible at.
To ensure that all clients can access this endpoint, it is recommended you set the following headers for this endpoint:
```
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: X-Requested-With, Content-Type, Authorization
```
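Whichever approach you use, you can verify the responses from another machine, assuming `example.org` is your server name:
```bash
curl https://example.org/.well-known/matrix/server
curl https://example.org/.well-known/matrix/client
```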
If you also want to be able to use [sliding sync][0], look [here](faq.md#how-do-i-setup-sliding-sync).
[0]: https://matrix.org/blog/2023/09/matrix-2-0/#sliding-sync

docs/deploying.md Normal file

@ -0,0 +1,3 @@
# Deploying
This chapter describes various ways to deploy Conduit.

docs/deploying/debian.md Normal file

@ -0,0 +1 @@
{{#include ../../debian/README.md}}


@ -0,0 +1,69 @@
# Conduit - Behind Traefik Reverse Proxy
version: '3'
services:
homeserver:
### If you already built the Conduit image with 'docker build' or want to use the Docker Hub image,
### then you are ready to go.
image: matrixconduit/matrix-conduit:latest
### If you want to build a fresh image from the sources, then comment the image line and uncomment the
### build lines. If you want meaningful labels in your built Conduit image, you should run docker compose like this:
### CREATED=$(date -u +'%Y-%m-%dT%H:%M:%SZ') VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml) docker compose up -d
# build:
# context: .
# args:
# CREATED: '2021-03-16T08:18:27Z'
# VERSION: '0.1.0'
# LOCAL: 'false'
# GIT_REF: origin/master
restart: unless-stopped
volumes:
- db:/var/lib/matrix-conduit/
networks:
- proxy
environment:
CONDUIT_SERVER_NAME: your.server.name # EDIT THIS
CONDUIT_DATABASE_PATH: /var/lib/matrix-conduit/
CONDUIT_DATABASE_BACKEND: rocksdb
CONDUIT_PORT: 6167
CONDUIT_MAX_REQUEST_SIZE: 20000000 # in bytes, ~20 MB
CONDUIT_ALLOW_REGISTRATION: 'true'
#CONDUIT_REGISTRATION_TOKEN: '' # require password for registration
CONDUIT_ALLOW_FEDERATION: 'true'
CONDUIT_ALLOW_CHECK_FOR_UPDATES: 'true'
CONDUIT_TRUSTED_SERVERS: '["matrix.org"]'
#CONDUIT_MAX_CONCURRENT_REQUESTS: 100
CONDUIT_ADDRESS: 0.0.0.0
CONDUIT_CONFIG: '' # Ignore this
# We need some way to serve the client and server .well-known json. The simplest way is to use an nginx container
# to serve those two as static files. If you want to use a different way, delete or comment the below service, here
# and in the docker compose override file.
well-known:
image: nginx:latest
restart: unless-stopped
volumes:
- ./nginx/matrix.conf:/etc/nginx/conf.d/matrix.conf # the config to serve the .well-known/matrix files
- ./nginx/www:/var/www/ # location of the client and server .well-known-files
### Uncomment if you want to use your own Element-Web App.
### Note: You need to provide a config.json for Element and you also need a second
### Domain or Subdomain for the communication between Element and Conduit
### Config-Docs: https://github.com/vector-im/element-web/blob/develop/docs/config.md
# element-web:
# image: vectorim/element-web:latest
# restart: unless-stopped
# volumes:
# - ./element_config.json:/app/config.json
# networks:
# - proxy
# depends_on:
# - homeserver
volumes:
db:
networks:
# This is the network Traefik listens to, if your network has a different
# name, don't forget to change it here and in the docker-compose.override.yml
proxy:
external: true


@ -18,7 +18,7 @@ services:
# We need some way to server the client and server .well-known json. The simplest way is to use a nginx container
# to serve those two as static files. If you want to use a different way, delete or comment the below service, here
# and in the docker-compose file.
# and in the docker compose file.
well-known:
labels:
- "traefik.enable=true"


@ -7,8 +7,8 @@ services:
### then you are ready to go.
image: matrixconduit/matrix-conduit:latest
### If you want to build a fresh image from the sources, then comment the image line and uncomment the
### build lines. If you want meaningful labels in your built Conduit image, you should run docker-compose like this:
### CREATED=$(date -u +'%Y-%m-%dT%H:%M:%SZ') VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml) docker-compose up -d
### build lines. If you want meaningful labels in your built Conduit image, you should run docker compose like this:
### CREATED=$(date -u +'%Y-%m-%dT%H:%M:%SZ') VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml) docker compose up -d
# build:
# context: .
# args:
@ -31,19 +31,18 @@ services:
### Uncomment and change values as desired
# CONDUIT_ADDRESS: 0.0.0.0
# CONDUIT_PORT: 6167
# CONDUIT_REGISTRATION_TOKEN: '' # require password for registration
# CONDUIT_CONFIG: '/srv/conduit/conduit.toml' # if you want to configure purely by env vars, set this to an empty string ''
# Available levels are: error, warn, info, debug, trace - more info at: https://docs.rs/env_logger/*/env_logger/#enabling-logging
# CONDUIT_LOG: info # default is: "info,rocket=off,_=off,sled=off"
# CONDUIT_ALLOW_JAEGER: 'false'
# CONDUIT_ALLOW_ENCRYPTION: 'false'
# CONDUIT_ALLOW_FEDERATION: 'false'
# CONDUIT_ALLOW_ENCRYPTION: 'true'
# CONDUIT_ALLOW_FEDERATION: 'true'
# CONDUIT_ALLOW_CHECK_FOR_UPDATES: 'true'
# CONDUIT_DATABASE_PATH: /srv/conduit/.local/share/conduit
# CONDUIT_WORKERS: 10
# CONDUIT_MAX_REQUEST_SIZE: 20_000_000 # in bytes, ~20 MB
# CONDUIT_MAX_REQUEST_SIZE: 20000000 # in bytes, ~20 MB
# We need some way to server the client and server .well-known json. The simplest way is to use a nginx container
# to serve those two as static files. If you want to use a different way, delete or comment the below service, here
# and in the docker-compose override file.
# and in the docker compose override file.
well-known:
image: nginx:latest
restart: unless-stopped
@ -65,11 +64,33 @@ services:
# depends_on:
# - homeserver
traefik:
image: "traefik:latest"
container_name: "traefik"
restart: "unless-stopped"
ports:
- "80:80"
- "443:443"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
# - "./traefik_config:/etc/traefik"
- "acme:/etc/traefik/acme"
labels:
- "traefik.enable=true"
# middleware redirect
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
# global redirect to https
- "traefik.http.routers.redirs.rule=hostregexp(`{host:.+}`)"
- "traefik.http.routers.redirs.entrypoints=http"
- "traefik.http.routers.redirs.middlewares=redirect-to-https"
networks:
- proxy
volumes:
db:
acme:
networks:
# This is the network Traefik listens to, if your network has a different
# name, don't forget to change it here and in the docker-compose.override.yml
proxy:
external: true


@ -7,8 +7,8 @@ services:
### then you are ready to go.
image: matrixconduit/matrix-conduit:latest
### If you want to build a fresh image from the sources, then comment the image line and uncomment the
### build lines. If you want meaningful labels in your built Conduit image, you should run docker-compose like this:
### CREATED=$(date -u +'%Y-%m-%dT%H:%M:%SZ') VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml) docker-compose up -d
### build lines. If you want meaningful labels in your built Conduit image, you should run docker compose like this:
### CREATED=$(date -u +'%Y-%m-%dT%H:%M:%SZ') VERSION=$(grep -m1 -o '[0-9].[0-9].[0-9]' Cargo.toml) docker compose up -d
# build:
# context: .
# args:
@ -20,27 +20,21 @@ services:
ports:
- 8448:6167
volumes:
- db:/srv/conduit/.local/share/conduit
### Uncomment if you want to use conduit.toml to configure Conduit
### Note: Set env vars will override conduit.toml values
# - ./conduit.toml:/srv/conduit/conduit.toml
- db:/var/lib/matrix-conduit/
environment:
CONDUIT_SERVER_NAME: localhost:6167 # replace with your own name
CONDUIT_TRUSTED_SERVERS: '["matrix.org"]'
CONDUIT_SERVER_NAME: your.server.name # EDIT THIS
CONDUIT_DATABASE_PATH: /var/lib/matrix-conduit/
CONDUIT_DATABASE_BACKEND: rocksdb
CONDUIT_PORT: 6167
CONDUIT_MAX_REQUEST_SIZE: 20000000 # in bytes, ~20 MB
CONDUIT_ALLOW_REGISTRATION: 'true'
### Uncomment and change values as desired
# CONDUIT_ADDRESS: 0.0.0.0
# CONDUIT_PORT: 6167
# CONDUIT_CONFIG: '/srv/conduit/conduit.toml' # if you want to configure purely by env vars, set this to an empty string ''
# Available levels are: error, warn, info, debug, trace - more info at: https://docs.rs/env_logger/*/env_logger/#enabling-logging
# CONDUIT_LOG: info # default is: "info,rocket=off,_=off,sled=off"
# CONDUIT_ALLOW_JAEGER: 'false'
# CONDUIT_ALLOW_ENCRYPTION: 'false'
# CONDUIT_ALLOW_FEDERATION: 'false'
# CONDUIT_DATABASE_PATH: /srv/conduit/.local/share/conduit
# CONDUIT_WORKERS: 10
# CONDUIT_MAX_REQUEST_SIZE: 20_000_000 # in bytes, ~20 MB
CONDUIT_ALLOW_FEDERATION: 'true'
CONDUIT_ALLOW_CHECK_FOR_UPDATES: 'true'
CONDUIT_TRUSTED_SERVERS: '["matrix.org"]'
#CONDUIT_MAX_CONCURRENT_REQUESTS: 100
CONDUIT_ADDRESS: 0.0.0.0
CONDUIT_CONFIG: '' # Ignore this
#
### Uncomment if you want to use your own Element-Web App.
### Note: You need to provide a config.json for Element and you also need a second
### Domain or Subdomain for the communication between Element and Conduit
@ -56,4 +50,4 @@ services:
# - homeserver
volumes:
db:
db:

217
docs/deploying/docker.md Normal file
View file

@ -0,0 +1,217 @@
# Conduit for Docker
> **Note:** To run and use Conduit you should probably use it with a Domain or Subdomain behind a reverse proxy (like Nginx, Traefik, Apache, ...) with a Lets Encrypt certificate.
## Docker
To run Conduit with Docker you can either build the image yourself or pull it from a registry.
### Use a registry
OCI images for Conduit are available in the registries listed below. We recommend using the image tagged as `latest` from GitLab's own registry.
| Registry | Image | Size | Notes |
| --------------- | --------------------------------------------------------------- | ----------------------------- | ---------------------- |
| GitLab Registry | [registry.gitlab.com/famedly/conduit/matrix-conduit:latest][gl] | ![Image Size][shield-latest] | Stable image. |
| Docker Hub | [docker.io/matrixconduit/matrix-conduit:latest][dh] | ![Image Size][shield-latest] | Stable image. |
| GitLab Registry | [registry.gitlab.com/famedly/conduit/matrix-conduit:next][gl] | ![Image Size][shield-next] | Development version. |
| Docker Hub | [docker.io/matrixconduit/matrix-conduit:next][dh] | ![Image Size][shield-next] | Development version. |
[dh]: https://hub.docker.com/r/matrixconduit/matrix-conduit
[gl]: https://gitlab.com/famedly/conduit/container_registry/2497937
[shield-latest]: https://img.shields.io/docker/image-size/matrixconduit/matrix-conduit/latest
[shield-next]: https://img.shields.io/docker/image-size/matrixconduit/matrix-conduit/next
Use
```bash
docker image pull <link>
```
to pull it to your machine.
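For example, to pull the stable image from Docker Hub:

```bash
docker image pull docker.io/matrixconduit/matrix-conduit:latest
```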
### Build using a dockerfile
The Dockerfile provided by Conduit has two stages, each of which creates an image.
1. **Builder:** Builds the binary from local context or by cloning a git revision from the official repository.
2. **Runner:** Copies the built binary from **Builder** and sets up the runtime environment, like creating a volume to persist the database and applying the correct permissions.
To build the image, you can use the following command:
```bash
docker build --tag matrixconduit/matrix-conduit:latest .
```
which will also tag the resulting image as `matrixconduit/matrix-conduit:latest`.
### Run
When you have the image, you can simply run it with
```bash
docker run -d -p 8448:6167 \
-v db:/var/lib/matrix-conduit/ \
-e CONDUIT_SERVER_NAME="your.server.name" \
-e CONDUIT_DATABASE_BACKEND="rocksdb" \
-e CONDUIT_ALLOW_REGISTRATION=true \
-e CONDUIT_ALLOW_FEDERATION=true \
-e CONDUIT_MAX_REQUEST_SIZE="20000000" \
-e CONDUIT_TRUSTED_SERVERS="[\"matrix.org\"]" \
-e CONDUIT_MAX_CONCURRENT_REQUESTS="100" \
-e CONDUIT_PORT="6167" \
--name conduit <link>
```
or you can use [docker compose](#docker-compose).
The `-d` flag lets the container run in detached mode. You now need to supply a `conduit.toml` config file; an example can be found [here](../configuration.md).
You can pass in different env vars to change config values on the fly. You can even configure Conduit completely by using env vars, but for that you need
to pass `-e CONDUIT_CONFIG=""` into your container. For an overview of possible values, please take a look at the `docker-compose.yml` file.
If you just want to test Conduit for a short time, you can use the `--rm` flag, which will clean up everything related to your container after you stop it.
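For example, a throwaway instance configured purely via env vars could look like this (illustrative values; the image is the Docker Hub one from the table above):

```bash
docker run --rm -p 8448:6167 \
    -e CONDUIT_CONFIG="" \
    -e CONDUIT_SERVER_NAME="your.server.name" \
    -e CONDUIT_DATABASE_BACKEND="rocksdb" \
    -e CONDUIT_ALLOW_REGISTRATION=true \
    --name conduit-test \
    docker.io/matrixconduit/matrix-conduit:latest
```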
### Docker compose
If the `docker run` command is not for you or your setup, you can also use one of the provided `docker compose` files.
Depending on your proxy setup, you can use one of the following files:
- If you already have a `traefik` instance set up, use [`docker-compose.for-traefik.yml`](docker-compose.for-traefik.yml)
- If you don't have a `traefik` instance set up (or any other reverse proxy), use [`docker-compose.with-traefik.yml`](docker-compose.with-traefik.yml)
- For any other reverse proxy, use [`docker-compose.yml`](docker-compose.yml)
When picking the traefik-related compose file, rename it so it matches `docker-compose.yml`, and
rename the override file to `docker-compose.override.yml`. Edit the latter with the values you want
for your server.
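For example, if you picked the for-traefik variant (adjust the filename if you chose the with-traefik one instead):

```bash
mv docker-compose.for-traefik.yml docker-compose.yml
# docker-compose.override.yml already has the right name; just edit its values
```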
Additional info about deploying Conduit can be found [here](generic.md).
### Build
To build the Conduit image with docker compose, you first need to open and modify the `docker-compose.yml` file. There you need to comment out the `image:` option and uncomment the `build:` option. Then call docker compose with:
```bash
docker compose up
```
This will also start the container right afterwards, so if you want it to run in detached mode, you should also use the `-d` flag.
### Run
If you already have built the image or want to use one from the registries, you can just start the container and everything else in the compose file in detached mode with:
```bash
docker compose up -d
```
> **Note:** Don't forget to modify and adjust the compose file to your needs.
### Use Traefik as Proxy
As a container user, you probably know about Traefik. It is an easy-to-use reverse proxy for making
containerized apps and services available through the web. With the two provided files,
[`docker-compose.for-traefik.yml`](docker-compose.for-traefik.yml) (or
[`docker-compose.with-traefik.yml`](docker-compose.with-traefik.yml)) and
[`docker-compose.override.yml`](docker-compose.override.yml), it is equally easy to deploy
and use Conduit, with a little caveat. If you already took a look at the files, then you should have
seen the `well-known` service, and that is the little caveat. Traefik is simply a proxy and
load balancer and is not able to serve any kind of content, but for Conduit to federate, we need to
either expose ports `443` and `8448` or serve two endpoints `.well-known/matrix/client` and
`.well-known/matrix/server`.
With the service `well-known` we use a single `nginx` container that will serve those two files.
So...step by step:
1. Copy [`docker-compose.for-traefik.yml`](docker-compose.for-traefik.yml) (or
[`docker-compose.with-traefik.yml`](docker-compose.with-traefik.yml)) and [`docker-compose.override.yml`](docker-compose.override.yml) from the repository and remove `.for-traefik` (or `.with-traefik`) from the filename.
2. Open both files and modify/adjust them to your needs, i.e. change the `CONDUIT_SERVER_NAME` and the volume host mappings.
3. Create the `conduit.toml` config file (an example can be found [here](../configuration.md)), or set `CONDUIT_CONFIG=""` and configure Conduit via env vars.
4. Uncomment the `element-web` service if you want to host your own Element Web client and create an `element_config.json`.
5. Create the files needed by the `well-known` service.
- `./nginx/matrix.conf` (relative to the compose file; you can change this, but then you also need to change the volume mapping)
```nginx
server {
server_name <SUBDOMAIN>.<DOMAIN>;
listen 80 default_server;
location /.well-known/matrix/server {
return 200 '{"m.server": "<SUBDOMAIN>.<DOMAIN>:443"}';
types { } default_type "application/json; charset=utf-8";
}
location /.well-known/matrix/client {
return 200 '{"m.homeserver": {"base_url": "https://<SUBDOMAIN>.<DOMAIN>"}}';
types { } default_type "application/json; charset=utf-8";
add_header "Access-Control-Allow-Origin" *;
}
location / {
return 404;
}
}
```
6. Run `docker compose up -d`
7. Connect to your homeserver with your preferred client and create a user. You should do this immediately after starting Conduit, because the first created user is the admin.
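Once everything is up, you can sanity-check the `well-known` service from outside (replace the placeholders with your actual domain first):

```bash
curl https://<SUBDOMAIN>.<DOMAIN>/.well-known/matrix/server
curl https://<SUBDOMAIN>.<DOMAIN>/.well-known/matrix/client
```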
## Voice communication
In order to make or receive calls, a TURN server is required. Conduit suggests using [Coturn](https://github.com/coturn/coturn) for this purpose, which is also available as a Docker image. Before proceeding with the software installation, it is essential to have the necessary configurations in place.
### Configuration
Create a configuration file called `coturn.conf` containing:
```conf
use-auth-secret
static-auth-secret=<a secret key>
realm=<your server domain>
```
A common way to generate a suitable alphanumeric secret key is by using `pwgen -s 64 1`.
These same values need to be set in Conduit. You can either modify `conduit.toml` to include these lines:
```toml
turn_uris = ["turn:<your server domain>?transport=udp", "turn:<your server domain>?transport=tcp"]
turn_secret = "<secret key from coturn configuration>"
```
or append the following to the Docker environment variables, depending on which configuration method you used earlier:
```yml
CONDUIT_TURN_URIS: '["turn:<your server domain>?transport=udp", "turn:<your server domain>?transport=tcp"]'
CONDUIT_TURN_SECRET: "<secret key from coturn configuration>"
```
Restart Conduit to apply these changes.
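If you are using the provided compose files, where the Conduit service is called `homeserver`, that could simply be:

```bash
docker compose restart homeserver
```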
### Run
Run the [Coturn](https://hub.docker.com/r/coturn/coturn) image using
```bash
docker run -d --network=host -v $(pwd)/coturn.conf:/etc/coturn/turnserver.conf coturn/coturn
```
or docker compose. For the latter, paste the following section into a file called `docker-compose.yml`
and run `docker compose up -d` in the same directory.
```yml
version: "3"
services:
turn:
container_name: coturn-server
image: docker.io/coturn/coturn
restart: unless-stopped
network_mode: "host"
volumes:
- ./coturn.conf:/etc/coturn/turnserver.conf
```
To understand why the host networking mode is used and explore alternative configuration options, please visit the following link: https://github.com/coturn/coturn/blob/master/docker/coturn/README.md.
For security recommendations see Synapse's [Coturn documentation](https://github.com/matrix-org/synapse/blob/develop/docs/setup/turn/coturn.md#configuration).

289
docs/deploying/generic.md Normal file
View file

@ -0,0 +1,289 @@
# Generic deployment documentation
> ## Getting help
>
> If you run into any problems while setting up Conduit, write an email to `conduit@koesters.xyz`, ask us
> in `#conduit:fachschaften.org` or [open an issue on GitLab](https://gitlab.com/famedly/conduit/-/issues/new).
## Installing Conduit
Although you might be able to compile Conduit for Windows, we do recommend running it on a Linux server. We therefore
only offer Linux binaries.
You may simply download the binary that fits your machine. Run `uname -m` to see what you need. For `arm`, you should use `aarch`. Now copy the appropriate URL:
**Stable/Main versions:**
| Target | Type | Download |
|-|-|-|
| `x86_64-unknown-linux-musl` | Statically linked Debian package | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/x86_64-unknown-linux-musl.deb?job=artifacts) |
| `aarch64-unknown-linux-musl` | Statically linked Debian package | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/aarch64-unknown-linux-musl.deb?job=artifacts) |
| `x86_64-unknown-linux-musl` | Statically linked binary | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/x86_64-unknown-linux-musl?job=artifacts) |
| `aarch64-unknown-linux-musl` | Statically linked binary | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/aarch64-unknown-linux-musl?job=artifacts) |
| `x86_64-unknown-linux-gnu` | OCI image | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/oci-image-amd64.tar.gz?job=artifacts) |
| `aarch64-unknown-linux-musl` | OCI image | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/master/raw/oci-image-arm64v8.tar.gz?job=artifacts) |
These builds were created on and linked against the glibc version shipped with Debian bullseye.
If you use a system with an older glibc version (e.g. RHEL8), you might need to compile Conduit yourself.
**Latest/Next versions:**
| Target | Type | Download |
|-|-|-|
| `x86_64-unknown-linux-musl` | Statically linked Debian package | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/x86_64-unknown-linux-musl.deb?job=artifacts) |
| `aarch64-unknown-linux-musl` | Statically linked Debian package | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/aarch64-unknown-linux-musl.deb?job=artifacts) |
| `x86_64-unknown-linux-musl` | Statically linked binary | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/x86_64-unknown-linux-musl?job=artifacts) |
| `aarch64-unknown-linux-musl` | Statically linked binary | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/aarch64-unknown-linux-musl?job=artifacts) |
| `x86_64-unknown-linux-gnu` | OCI image | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/oci-image-amd64.tar.gz?job=artifacts) |
| `aarch64-unknown-linux-musl` | OCI image | [link](https://gitlab.com/api/v4/projects/famedly%2Fconduit/jobs/artifacts/next/raw/oci-image-arm64v8.tar.gz?job=artifacts) |
```bash
$ sudo wget -O /usr/local/bin/matrix-conduit <url>
$ sudo chmod +x /usr/local/bin/matrix-conduit
```
Alternatively, you may compile the binary yourself. First, install any dependencies:
```bash
# Debian
$ sudo apt install libclang-dev build-essential
# RHEL
$ sudo dnf install clang
```
Then, `cd` into the source tree of conduit-next and run:
```bash
$ cargo build --release
```
## Adding a Conduit user
While Conduit can run as any user, it is usually better to use dedicated users for different services. This also allows
you to make sure that the file permissions are correctly set up.
In Debian or RHEL, you can use this command to create a Conduit user:
```bash
sudo adduser --system conduit --group --disabled-login --no-create-home
```
## Forwarding ports in the firewall or the router
Conduit uses ports 443 and 8448, both of which need to be open in the firewall.
If Conduit runs behind a router or in a container and has a different public IP address than the host system, these public ports need to be forwarded directly or indirectly to the port mentioned in the config.
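For example, with `ufw` (adjust this to whichever firewall you actually use):

```bash
sudo ufw allow 443/tcp
sudo ufw allow 8448/tcp
```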
## Optional: Avoid port 8448
If Conduit runs behind a Cloudflare reverse proxy, which doesn't support port 8448 on free plans, [delegation](https://matrix-org.github.io/synapse/latest/delegate.html) can be set up to have federation traffic routed to port 443:
```apache
# .well-known delegation on Apache
<Files "/.well-known/matrix/server">
ErrorDocument 200 '{"m.server": "your.server.name:443"}'
Header always set Content-Type application/json
Header always set Access-Control-Allow-Origin *
</Files>
```
[SRV DNS record](https://spec.matrix.org/latest/server-server-api/#resolving-server-names) delegation is also [possible](https://www.cloudflare.com/en-gb/learning/dns/dns-records/dns-srv-record/).
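If you go the SRV route, a quick way to confirm the record is in place is a `dig` lookup (illustrative; the record name follows the server-server spec linked above, substitute your own server name):

```bash
dig +short SRV _matrix-fed._tcp.your.server.name
```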
## Setting up a systemd service
Now we'll set up a systemd service for Conduit, so it's easy to start/stop Conduit and set it to autostart when your
server reboots. Simply paste the default systemd service you can find below into
`/etc/systemd/system/conduit.service`.
```systemd
[Unit]
Description=Conduit Matrix Server
After=network.target
[Service]
Environment="CONDUIT_CONFIG=/etc/matrix-conduit/conduit.toml"
User=conduit
Group=conduit
Restart=always
ExecStart=/usr/local/bin/matrix-conduit
[Install]
WantedBy=multi-user.target
```
Finally, run
```bash
$ sudo systemctl daemon-reload
```
## Creating the Conduit configuration file
Now we need to create Conduit's config file in
`/etc/matrix-conduit/conduit.toml`. Paste in the contents of
[`conduit-example.toml`](../configuration.md) **and take a moment to read it.
You need to change at least the server name.**
You can also choose to use a different database backend, but right now only `rocksdb` and `sqlite` are recommended.
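Purely for orientation, a minimal config covering the values this guide assumes might look like the sketch below (illustrative only, not a replacement for reading the full example config; option names mirror the `CONDUIT_*` env vars used elsewhere in these docs):

```bash
# write an illustrative minimal config; adjust the values to your setup
sudo tee /etc/matrix-conduit/conduit.toml > /dev/null <<'EOF'
[global]
server_name = "your.server.name"   # EDIT THIS
database_backend = "rocksdb"
database_path = "/var/lib/matrix-conduit/"
port = 6167
address = "127.0.0.1"
allow_registration = true
allow_federation = true
trusted_servers = ["matrix.org"]
EOF
```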
## Setting the correct file permissions
As we are using a Conduit-specific user, we need to allow it to read the config. To do that you can run this command on
Debian or RHEL:
```bash
sudo chown -R root:root /etc/matrix-conduit
sudo chmod 755 /etc/matrix-conduit
```
If you use the default database path you also need to run this:
```bash
sudo mkdir -p /var/lib/matrix-conduit/
sudo chown -R conduit:conduit /var/lib/matrix-conduit/
sudo chmod 700 /var/lib/matrix-conduit/
```
## Setting up the Reverse Proxy
This depends on whether you use Apache, Caddy, Nginx or another web server.
### Apache
Create `/etc/apache2/sites-enabled/050-conduit.conf` and copy-and-paste this:
```apache
# Requires mod_proxy and mod_proxy_http
#
# On Apache instance compiled from source,
# paste into httpd-ssl.conf or httpd.conf
Listen 8448
<VirtualHost *:443 *:8448>
ServerName your.server.name # EDIT THIS
AllowEncodedSlashes NoDecode
ProxyPass /_matrix/ http://127.0.0.1:6167/_matrix/ timeout=300 nocanon
ProxyPassReverse /_matrix/ http://127.0.0.1:6167/_matrix/
</VirtualHost>
```
**You need to make some edits again.** When you are done, run
```bash
# Debian
$ sudo systemctl reload apache2
# Installed from source
$ sudo apachectl -k graceful
```
### Caddy
Create `/etc/caddy/conf.d/conduit_caddyfile` and enter this (substituting your server name).
```caddy
your.server.name, your.server.name:8448 {
reverse_proxy /_matrix/* 127.0.0.1:6167
}
```
That's it! Just start or enable the service and you're set.
```bash
$ sudo systemctl enable caddy
```
### Nginx
If you use Nginx and not Apache, add the following server section inside the `http` section of `/etc/nginx/nginx.conf`:
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
listen 8448 ssl http2;
listen [::]:8448 ssl http2;
server_name your.server.name; # EDIT THIS
merge_slashes off;
# Nginx defaults to only allow 1MB uploads
# Increase this to allow posting large files such as videos
client_max_body_size 20M;
location /_matrix/ {
proxy_pass http://127.0.0.1:6167;
proxy_set_header Host $http_host;
proxy_buffering off;
proxy_read_timeout 5m;
}
ssl_certificate /etc/letsencrypt/live/your.server.name/fullchain.pem; # EDIT THIS
ssl_certificate_key /etc/letsencrypt/live/your.server.name/privkey.pem; # EDIT THIS
ssl_trusted_certificate /etc/letsencrypt/live/your.server.name/chain.pem; # EDIT THIS
include /etc/letsencrypt/options-ssl-nginx.conf;
}
```
**You need to make some edits again.** When you are done, run
```bash
$ sudo systemctl reload nginx
```
## SSL Certificate
If you chose Caddy as your web proxy, SSL certificates are handled automatically and you can skip this step.
The easiest way to get an SSL certificate, if you don't have one already, is to [install](https://certbot.eff.org/instructions) `certbot` and run this:
```bash
# To use ECC for the private key,
# paste into /etc/letsencrypt/cli.ini:
# key-type = ecdsa
# elliptic-curve = secp384r1
$ sudo certbot -d your.server.name
```
[Automated renewal](https://eff-certbot.readthedocs.io/en/stable/using.html#automated-renewals) is usually preconfigured.
If using Cloudflare, instead configure the edge and origin certificates in the dashboard. In case you're already running a website on the same Apache server, you can just copy-and-paste the SSL configuration from your main virtual host on port 443 into the above-mentioned vhost.
## You're done!
Now you can start Conduit with:
```bash
$ sudo systemctl start conduit
```
Set it to start automatically when your system boots with:
```bash
$ sudo systemctl enable conduit
```
## How do I know it works?
You can open [a Matrix client](https://matrix.org/ecosystem/clients), enter your homeserver and try to register. If you are using a registration token, use [Element web](https://app.element.io/), [Nheko](https://matrix.org/ecosystem/clients/nheko/) or [SchildiChat web](https://app.schildi.chat/), as they support this feature.
You can also use these commands as a quick health check.
```bash
$ curl https://your.server.name/_matrix/client/versions
# If using port 8448
$ curl https://your.server.name:8448/_matrix/client/versions
```
- To check if your server can talk with other homeservers, you can use the [Matrix Federation Tester](https://federationtester.matrix.org/).
If you can register but cannot join federated rooms, check your config again and also check if port 8448 is open and forwarded correctly.
# What's next?
## Audio/Video calls
For Audio/Video call functionality see the [TURN Guide](../turn.md).
## Appservices
If you want to set up an appservice, take a look at the [Appservice Guide](../appservices.md).

18
docs/deploying/nixos.md Normal file
View file

@ -0,0 +1,18 @@
# Conduit for NixOS
Conduit can be acquired by Nix from various places:
* The `flake.nix` at the root of the repo
* The `default.nix` at the root of the repo
* From Nixpkgs
The `flake.nix` and `default.nix` do not (currently) provide a NixOS module, so
(for now) [`services.matrix-conduit`][module] from Nixpkgs should be used to
configure Conduit.
If you want to run the latest code, you should get Conduit from the `flake.nix`
or `default.nix` and set [`services.matrix-conduit.package`][package]
appropriately.
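If you just want to check that the flake builds before wiring it into your NixOS configuration, something like this should work (requires flakes to be enabled):

```bash
nix build gitlab:famedly/conduit
```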
[module]: https://search.nixos.org/options?channel=unstable&query=services.matrix-conduit
[package]: https://search.nixos.org/options?channel=unstable&query=services.matrix-conduit.package

41
docs/faq.md Normal file
View file

@ -0,0 +1,41 @@
# FAQ
Here are some of the most frequently asked questions about Conduit, and their answers.
## Why do I get a `M_INCOMPATIBLE_ROOM_VERSION` error when trying to join some rooms?
Conduit doesn't support room versions 1 and 2 at all, and doesn't properly support versions 3-5 currently. You can track the progress of adding support [here](https://gitlab.com/famedly/conduit/-/issues/433).
## How do I back up my server?
Backing up your Conduit server is easy:
you can simply stop Conduit, make a copy or file system snapshot of the database directory, then start Conduit again.
> **Note**: When using a file system snapshot, it is not required that you stop the server, but it is still recommended as it is the safest option and should ensure your database is not left in an inconsistent state.
## How do I set up sliding sync?
If you use the [automatic method for delegation](delegation.md#automatic-recommended) or just proxy `.well-known/matrix/client` to Conduit, sliding sync should work with no extra configuration.
If you don't, continue below.
You need to add an `org.matrix.msc3575.proxy` field to your `.well-known/matrix/client` response, which contains a URL that Conduit is accessible behind.
Here is an example:
```json
{
~ "m.homeserver": {
~ "base_url": "https://matrix.example.org"
~ },
"org.matrix.msc3575.proxy": {
"url": "https://matrix.example.org"
}
}
```
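A quick way to see exactly what clients will get back (assuming your server name is `example.org`):

```bash
curl -s https://example.org/.well-known/matrix/client
```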
## Can I migrate from Synapse to Conduit?
Not really. You can reuse the domain of your current server with Conduit, but you will not be able to migrate accounts automatically.
Rooms that were federated can be re-joined via the other participating servers, however media and the like may be deleted from remote servers after some time, and hence might not be recoverable.
## How do I make someone an admin?
Simply invite them to the admin room. Once joined, they can administer the server by interacting with the `@conduit:<server_name>` user.

13
docs/introduction.md Normal file
View file

@ -0,0 +1,13 @@
# Conduit
{{#include ../README.md:catchphrase}}
{{#include ../README.md:body}}
#### How can I deploy my own?
- [Deployment options](deploying.md)
If you want to connect an Appservice to Conduit, take a look at the [appservices documentation](appservices.md).
{{#include ../README.md:footer}}

25
docs/turn.md Normal file
View file

@ -0,0 +1,25 @@
# Setting up TURN/STUN
## General instructions
* It is assumed you have a [Coturn server](https://github.com/coturn/coturn) up and running. See [Synapse reference implementation](https://github.com/element-hq/synapse/blob/develop/docs/turn-howto.md).
## Edit/Add a few settings to your existing conduit.toml
```toml
# Refer to your Coturn settings.
# `your.turn.url` has to match the REALM setting of your Coturn as well as `transport`.
turn_uris = ["turn:your.turn.url?transport=udp", "turn:your.turn.url?transport=tcp"]
# static-auth-secret of your turnserver
turn_secret = "ADD SECRET HERE"
# If you have your TURN server configured to use a username and password
# you can provide this information too. In this case, comment out `turn_secret` above!
#turn_username = ""
#turn_password = ""
```
## Apply settings
Restart Conduit.
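For example, if you installed Conduit with the systemd unit from the generic deployment guide:

```bash
sudo systemctl restart conduit
```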

79
engage.toml Normal file
View file

@ -0,0 +1,79 @@
interpreter = ["bash", "-euo", "pipefail", "-c"]
[[task]]
group = "versions"
name = "engage"
script = "engage --version"
[[task]]
group = "versions"
name = "rustc"
script = "rustc --version"
[[task]]
group = "versions"
name = "cargo"
script = "cargo --version"
[[task]]
group = "versions"
name = "cargo-fmt"
script = "cargo fmt --version"
[[task]]
group = "versions"
name = "rustdoc"
script = "rustdoc --version"
[[task]]
group = "versions"
name = "cargo-clippy"
script = "cargo clippy -- --version"
[[task]]
group = "versions"
name = "lychee"
script = "lychee --version"
[[task]]
group = "lints"
name = "cargo-fmt"
script = "cargo fmt --check -- --color=always"
[[task]]
group = "lints"
name = "cargo-doc"
script = """
RUSTDOCFLAGS="-D warnings" cargo doc \
--workspace \
--no-deps \
--document-private-items \
--color always
"""
[[task]]
group = "lints"
name = "cargo-clippy"
script = "cargo clippy --workspace --all-targets --color=always -- -D warnings"
[[task]]
group = "lints"
name = "taplo-fmt"
script = "taplo fmt --check --colors always"
[[task]]
group = "lints"
name = "lychee"
script = "lychee --offline docs"
[[task]]
group = "tests"
name = "cargo"
script = """
cargo test \
--workspace \
--all-targets \
--color=always \
-- \
--color=always
"""

263
flake.lock generated Normal file
View file

@ -0,0 +1,263 @@
{
"nodes": {
"attic": {
"inputs": {
"crane": "crane",
"flake-compat": "flake-compat",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"nixpkgs-stable": "nixpkgs-stable"
},
"locked": {
"lastModified": 1707922053,
"narHash": "sha256-wSZjK+rOXn+UQiP1NbdNn5/UW6UcBxjvlqr2wh++MbM=",
"owner": "zhaofengli",
"repo": "attic",
"rev": "6eabc3f02fae3683bffab483e614bebfcd476b21",
"type": "github"
},
"original": {
"owner": "zhaofengli",
"ref": "main",
"repo": "attic",
"type": "github"
}
},
"crane": {
"inputs": {
"nixpkgs": [
"attic",
"nixpkgs"
]
},
"locked": {
"lastModified": 1702918879,
"narHash": "sha256-tWJqzajIvYcaRWxn+cLUB9L9Pv4dQ3Bfit/YjU5ze3g=",
"owner": "ipetkov",
"repo": "crane",
"rev": "7195c00c272fdd92fc74e7d5a0a2844b9fadb2fb",
"type": "github"
},
"original": {
"owner": "ipetkov",
"repo": "crane",
"type": "github"
}
},
"crane_2": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1713721181,
"narHash": "sha256-Vz1KRVTzU3ClBfyhOj8gOehZk21q58T1YsXC30V23PU=",
"owner": "ipetkov",
"repo": "crane",
"rev": "55f4939ac59ff8f89c6a4029730a2d49ea09105f",
"type": "github"
},
"original": {
"owner": "ipetkov",
"ref": "master",
"repo": "crane",
"type": "github"
}
},
"fenix": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1709619709,
"narHash": "sha256-l6EPVJfwfelWST7qWQeP6t/TDK3HHv5uUB1b2vw4mOQ=",
"owner": "nix-community",
"repo": "fenix",
"rev": "c8943ea9e98d41325ff57d4ec14736d330b321b2",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1673956053,
"narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-compat_2": {
"flake": false,
"locked": {
"lastModified": 1696426674,
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-utils": {
"locked": {
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1709126324,
"narHash": "sha256-q6EQdSeUZOG26WelxqkmR7kArjgWCdw5sfJVHPH/7j8=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "d465f4819400de7c8d874d50b982301f28a84605",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nix-filter": {
"locked": {
"lastModified": 1705332318,
"narHash": "sha256-kcw1yFeJe9N4PjQji9ZeX47jg0p9A0DuU4djKvg1a7I=",
"owner": "numtide",
"repo": "nix-filter",
"rev": "3449dc925982ad46246cfc36469baf66e1b64f17",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "nix-filter",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1702539185,
"narHash": "sha256-KnIRG5NMdLIpEkZTnN5zovNYc0hhXjAgv6pfd5Z4c7U=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "aa9d4729cbc99dabacb50e3994dcefb3ea0f7447",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-stable": {
"locked": {
"lastModified": 1702780907,
"narHash": "sha256-blbrBBXjjZt6OKTcYX1jpe9SRof2P9ZYWPzq22tzXAA=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "1e2e384c5b7c50dbf8e9c441a9e58d85f408b01f",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1709479366,
"narHash": "sha256-n6F0n8UV6lnTZbYPl1A9q1BS0p4hduAv1mGAP17CVd0=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b8697e57f10292a6165a20f03d2f42920dfaf973",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"attic": "attic",
"crane": "crane_2",
"fenix": "fenix",
"flake-compat": "flake-compat_2",
"flake-utils": "flake-utils_2",
"nix-filter": "nix-filter",
"nixpkgs": "nixpkgs_2"
}
},
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1709571018,
"narHash": "sha256-ISFrxHxE0J5g7lDAscbK88hwaT5uewvWoma9TlFmRzM=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "9f14343f9ee24f53f17492c5f9b653427e2ad15e",
"type": "github"
},
"original": {
"owner": "rust-lang",
"ref": "nightly",
"repo": "rust-analyzer",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}

115
flake.nix Normal file
View file

@ -0,0 +1,115 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs?ref=nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
nix-filter.url = "github:numtide/nix-filter";
flake-compat = {
url = "github:edolstra/flake-compat";
flake = false;
};
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
};
crane = {
url = "github:ipetkov/crane?ref=master";
inputs.nixpkgs.follows = "nixpkgs";
};
attic.url = "github:zhaofengli/attic?ref=main";
};
outputs = inputs:
let
# Keep sorted
mkScope = pkgs: pkgs.lib.makeScope pkgs.newScope (self: {
craneLib =
(inputs.crane.mkLib pkgs).overrideToolchain self.toolchain;
default = self.callPackage ./nix/pkgs/default {};
inherit inputs;
oci-image = self.callPackage ./nix/pkgs/oci-image {};
book = self.callPackage ./nix/pkgs/book {};
rocksdb =
let
version = "9.1.1";
in
pkgs.rocksdb.overrideAttrs (old: {
inherit version;
src = pkgs.fetchFromGitHub {
owner = "facebook";
repo = "rocksdb";
rev = "v${version}";
hash = "sha256-/Xf0bzNJPclH9IP80QNaABfhj4IAR5LycYET18VFCXc=";
};
});
shell = self.callPackage ./nix/shell.nix {};
# The Rust toolchain to use
toolchain = inputs
.fenix
.packages
.${pkgs.pkgsBuildHost.system}
.fromToolchainFile {
file = ./rust-toolchain.toml;
# See also `rust-toolchain.toml`
sha256 = "sha256-Ngiz76YP4HTY75GGdH2P+APE/DEIx2R/Dn+BwwOyzZU=";
};
});
in
inputs.flake-utils.lib.eachDefaultSystem (system:
let
pkgs = inputs.nixpkgs.legacyPackages.${system};
in
{
packages = {
default = (mkScope pkgs).default;
oci-image = (mkScope pkgs).oci-image;
book = (mkScope pkgs).book;
}
//
builtins.listToAttrs
(builtins.concatLists
(builtins.map
(crossSystem:
let
binaryName = "static-${crossSystem}";
pkgsCrossStatic =
(import inputs.nixpkgs {
inherit system;
crossSystem = {
config = crossSystem;
};
}).pkgsStatic;
in
[
# An output for a statically-linked binary
{
name = binaryName;
value = (mkScope pkgsCrossStatic).default;
}
# An output for an OCI image based on that binary
{
name = "oci-image-${crossSystem}";
value = (mkScope pkgsCrossStatic).oci-image;
}
]
)
[
"x86_64-unknown-linux-musl"
"aarch64-unknown-linux-musl"
]
)
);
devShells.default = (mkScope pkgs).shell;
}
);
}

34
nix/pkgs/book/default.nix Normal file
View file

@ -0,0 +1,34 @@
# Keep sorted
{ default
, inputs
, mdbook
, stdenv
}:
stdenv.mkDerivation {
pname = "${default.pname}-book";
version = default.version;
src = let filter = inputs.nix-filter.lib; in filter {
root = inputs.self;
# Keep sorted
include = [
"book.toml"
"conduit-example.toml"
"debian/README.md"
"docs"
"README.md"
];
};
nativeBuildInputs = [
mdbook
];
buildPhase = ''
mdbook build
mv public $out
'';
}

View file

@ -0,0 +1,100 @@
{ lib
, pkgsBuildHost
, rust
, stdenv
}:
lib.optionalAttrs stdenv.hostPlatform.isStatic {
ROCKSDB_STATIC = "";
}
//
{
CARGO_BUILD_RUSTFLAGS =
lib.concatStringsSep
" "
([]
# This disables PIE for static builds, which isn't great in terms of
# security. Unfortunately, my hand is forced because nixpkgs'
# `libstdc++.a` is built without `-fPIE`, which precludes us from
# leaving PIE enabled.
++ lib.optionals
stdenv.hostPlatform.isStatic
[ "-C" "relocation-model=static" ]
++ lib.optionals
(stdenv.buildPlatform.config != stdenv.hostPlatform.config)
[ "-l" "c" ]
++ lib.optionals
# This check has to match the one [here][0]. We only need to set
# these flags when using a different linker. Don't ask me why, though,
# because I don't know. All I know is it breaks otherwise.
#
# [0]: https://github.com/NixOS/nixpkgs/blob/5cdb38bb16c6d0a38779db14fcc766bc1b2394d6/pkgs/build-support/rust/lib/default.nix#L37-L40
(
# Nixpkgs doesn't check for x86_64 here but we do, because I
# observed a failure building statically for x86_64 without
# including it here. Linkers are weird.
(stdenv.hostPlatform.isAarch64 || stdenv.hostPlatform.isx86_64)
&& stdenv.hostPlatform.isStatic
&& !stdenv.isDarwin
&& !stdenv.cc.bintools.isLLVM
)
[
"-l"
"stdc++"
"-L"
"${stdenv.cc.cc.lib}/${stdenv.hostPlatform.config}/lib"
]
);
}
# What follows is stolen from [here][0]. Its purpose is to properly configure
# compilers and linkers for various stages of the build, and even covers the
# case of build scripts that need native code compiled and run on the build
# platform (I think).
#
# [0]: https://github.com/NixOS/nixpkgs/blob/5cdb38bb16c6d0a38779db14fcc766bc1b2394d6/pkgs/build-support/rust/lib/default.nix#L57-L80
//
(
let
inherit (rust.lib) envVars;
in
lib.optionalAttrs
(stdenv.targetPlatform.rust.rustcTarget
!= stdenv.hostPlatform.rust.rustcTarget)
(
let
inherit (stdenv.targetPlatform.rust) cargoEnvVarTarget;
in
{
"CC_${cargoEnvVarTarget}" = envVars.ccForTarget;
"CXX_${cargoEnvVarTarget}" = envVars.cxxForTarget;
"CARGO_TARGET_${cargoEnvVarTarget}_LINKER" =
envVars.linkerForTarget;
}
)
//
(
let
inherit (stdenv.hostPlatform.rust) cargoEnvVarTarget rustcTarget;
in
{
"CC_${cargoEnvVarTarget}" = envVars.ccForHost;
"CXX_${cargoEnvVarTarget}" = envVars.cxxForHost;
"CARGO_TARGET_${cargoEnvVarTarget}_LINKER" = envVars.linkerForHost;
CARGO_BUILD_TARGET = rustcTarget;
}
)
//
(
let
inherit (stdenv.buildPlatform.rust) cargoEnvVarTarget;
in
{
"CC_${cargoEnvVarTarget}" = envVars.ccForBuild;
"CXX_${cargoEnvVarTarget}" = envVars.cxxForBuild;
"CARGO_TARGET_${cargoEnvVarTarget}_LINKER" = envVars.linkerForBuild;
HOST_CC = "${pkgsBuildHost.stdenv.cc}/bin/cc";
HOST_CXX = "${pkgsBuildHost.stdenv.cc}/bin/c++";
}
)
)

View file

@ -0,0 +1,95 @@
# Dependencies (keep sorted)
{ craneLib
, inputs
, lib
, pkgsBuildHost
, rocksdb
, rust
, stdenv
# Options (keep sorted)
, default-features ? true
, features ? []
, profile ? "release"
}:
let
buildDepsOnlyEnv =
let
rocksdb' = rocksdb.override {
enableJemalloc = builtins.elem "jemalloc" features;
};
in
{
NIX_OUTPATH_USED_AS_RANDOM_SEED = "randomseed"; # https://crane.dev/faq/rebuilds-bindgen.html
ROCKSDB_INCLUDE_DIR = "${rocksdb'}/include";
ROCKSDB_LIB_DIR = "${rocksdb'}/lib";
}
//
(import ./cross-compilation-env.nix {
# Keep sorted
inherit
lib
pkgsBuildHost
rust
stdenv;
});
buildPackageEnv = {
CONDUIT_VERSION_EXTRA = inputs.self.shortRev or inputs.self.dirtyShortRev;
} // buildDepsOnlyEnv;
commonAttrs = {
inherit
(craneLib.crateNameFromCargoToml {
cargoToml = "${inputs.self}/Cargo.toml";
})
pname
version;
src = let filter = inputs.nix-filter.lib; in filter {
root = inputs.self;
# Keep sorted
include = [
"Cargo.lock"
"Cargo.toml"
"src"
];
};
nativeBuildInputs = [
# bindgen needs the build platform's libclang. Apparently due to "splicing
# weirdness", pkgs.rustPlatform.bindgenHook on its own doesn't quite do the
# right thing here.
pkgsBuildHost.rustPlatform.bindgenHook
];
CARGO_PROFILE = profile;
};
in
craneLib.buildPackage ( commonAttrs // {
cargoArtifacts = craneLib.buildDepsOnly (commonAttrs // {
env = buildDepsOnlyEnv;
});
cargoExtraArgs = "--locked "
+ lib.optionalString
(!default-features)
"--no-default-features "
+ lib.optionalString
(features != [])
"--features " + (builtins.concatStringsSep "," features);
# This is redundant with CI
doCheck = false;
env = buildPackageEnv;
passthru = {
env = buildPackageEnv;
};
meta.mainProgram = commonAttrs.pname;
})

View file

@ -0,0 +1,25 @@
# Keep sorted
{ default
, dockerTools
, lib
, tini
}:
dockerTools.buildImage {
name = default.pname;
tag = "next";
copyToRoot = [
dockerTools.caCertificates
];
config = {
# Use the `tini` init system so that signals (e.g. ctrl+c/SIGINT)
# are handled as expected
Entrypoint = [
"${lib.getExe' tini "tini"}"
"--"
];
Cmd = [
"${lib.getExe default}"
];
};
}

61
nix/shell.nix Normal file
View file

@ -0,0 +1,61 @@
# Keep sorted
{ cargo-deb
, default
, engage
, go
, inputs
, jq
, lychee
, mdbook
, mkShell
, olm
, system
, taplo
, toolchain
}:
mkShell {
env = default.env // {
# Rust Analyzer needs to be able to find the path to default crate
# sources, and it can read this environment variable to do so. The
# `rust-src` component is required in order for this to work.
RUST_SRC_PATH = "${toolchain}/lib/rustlib/src/rust/library";
};
# Development tools
nativeBuildInputs = [
# Always use nightly rustfmt because most of its options are unstable
#
# This needs to come before `toolchain` in this list, otherwise
# `$PATH` will have stable rustfmt instead.
inputs.fenix.packages.${system}.latest.rustfmt
# rust itself
toolchain
# CI tests
engage
# format toml files
taplo
# Needed for producing Debian packages
cargo-deb
# Needed for our script for Complement
jq
# Needed for Complement
go
olm
# Needed for our script for Complement
jq
# Needed for finding broken markdown links
lychee
# Useful for editing the book locally
mdbook
] ++ default.nativeBuildInputs ;
}

View file

@ -1 +0,0 @@
1.53

21
rust-toolchain.toml Normal file
View file

@ -0,0 +1,21 @@
# This is the authoritative configuration of this project's Rust toolchain.
#
# Other files that need upkeep when this changes:
#
# * `Cargo.toml`
# * `flake.nix`
#
# Search in those files for `rust-toolchain.toml` to find the relevant places.
# If you're having trouble making the relevant changes, bug a maintainer.
[toolchain]
channel = "1.79.0"
components = [
# For rust-analyzer
"rust-src",
]
targets = [
"aarch64-unknown-linux-musl",
"x86_64-unknown-linux-gnu",
"x86_64-unknown-linux-musl",
]

View file

@ -1,2 +1,2 @@
imports_granularity = "Crate"
unstable_features = true
imports_granularity="Crate"

View file

@ -1,27 +1,37 @@
use crate::{utils, Error, Result};
use crate::{services, utils, Error, Result};
use bytes::BytesMut;
use ruma::api::{IncomingResponse, OutgoingRequest, SendAccessToken};
use std::{
convert::{TryFrom, TryInto},
fmt::Debug,
mem,
time::Duration,
use ruma::api::{
appservice::Registration, IncomingResponse, MatrixVersion, OutgoingRequest, SendAccessToken,
};
use std::{fmt::Debug, mem, time::Duration};
use tracing::warn;
pub(crate) async fn send_request<T: OutgoingRequest>(
globals: &crate::database::globals::Globals,
registration: serde_yaml::Value,
/// Sends a request to an appservice
///
/// Only returns None if there is no url specified in the appservice registration file
#[tracing::instrument(skip(request))]
pub(crate) async fn send_request<T>(
registration: Registration,
request: T,
) -> Result<T::IncomingResponse>
) -> Result<Option<T::IncomingResponse>>
where
T: Debug,
T: OutgoingRequest + Debug,
{
let destination = registration.get("url").unwrap().as_str().unwrap();
let hs_token = registration.get("hs_token").unwrap().as_str().unwrap();
let destination = match registration.url {
Some(url) => url,
None => {
return Ok(None);
}
};
let hs_token = registration.hs_token.as_str();
let mut http_request = request
.try_into_http_request::<BytesMut>(destination, SendAccessToken::IfRequired(""))
.try_into_http_request::<BytesMut>(
&destination,
SendAccessToken::IfRequired(hs_token),
&[MatrixVersion::V1_0],
)
.unwrap()
.map(|body| body.freeze());
@ -40,17 +50,26 @@ where
);
*http_request.uri_mut() = parts.try_into().expect("our manipulation is always valid");
let mut reqwest_request = reqwest::Request::try_from(http_request)
.expect("all http requests are valid reqwest requests");
let mut reqwest_request = reqwest::Request::try_from(http_request)?;
*reqwest_request.timeout_mut() = Some(Duration::from_secs(30));
let url = reqwest_request.url().clone();
let mut response = globals
.reqwest_client()?
.build()?
let mut response = match services()
.globals
.default_client()
.execute(reqwest_request)
.await?;
.await
{
Ok(r) => r,
Err(e) => {
warn!(
"Could not send request to appservice {:?} at {}: {}",
registration.id, destination, e
);
return Err(e.into());
}
};
// reqwest::Response -> http::Response conversion
let status = response.status();
@ -84,7 +103,8 @@ where
.body(body)
.expect("reqwest body is valid http body"),
);
response.map_err(|_| {
response.map(Some).map_err(|_| {
warn!(
"Appservice returned invalid response bytes {}\n{}",
destination, url

View file

@ -0,0 +1,502 @@
use super::{DEVICE_ID_LENGTH, SESSION_ID_LENGTH, TOKEN_LENGTH};
use crate::{api::client_server, services, utils, Error, Result, Ruma};
use ruma::{
api::client::{
account::{
change_password, deactivate, get_3pids, get_username_availability,
register::{self, LoginType},
request_3pid_management_token_via_email, request_3pid_management_token_via_msisdn,
whoami, ThirdPartyIdRemovalStatus,
},
error::ErrorKind,
uiaa::{AuthFlow, AuthType, UiaaInfo},
},
events::{room::message::RoomMessageEventContent, GlobalAccountDataEventType},
push, UserId,
};
use tracing::{info, warn};
use register::RegistrationKind;
const RANDOM_USER_ID_LENGTH: usize = 10;
/// # `GET /_matrix/client/r0/register/available`
///
/// Checks if a username is valid and available on this server.
///
/// Conditions for returning true:
/// - The user id is not historical
/// - The server name of the user id matches this server
/// - No user or appservice on this server already claimed this username
///
/// Note: This will not reserve the username, so the username might become invalid when trying to register
pub async fn get_register_available_route(
body: Ruma<get_username_availability::v3::Request>,
) -> Result<get_username_availability::v3::Response> {
// Validate user id
let user_id = UserId::parse_with_server_name(
body.username.to_lowercase(),
services().globals.server_name(),
)
.ok()
.filter(|user_id| {
!user_id.is_historical() && user_id.server_name() == services().globals.server_name()
})
.ok_or(Error::BadRequest(
ErrorKind::InvalidUsername,
"Username is invalid.",
))?;
// Check if username is creative enough
if services().users.exists(&user_id)? {
return Err(Error::BadRequest(
ErrorKind::UserInUse,
"Desired user ID is already taken.",
));
}
// TODO add check for appservice namespaces
// If no if check is true we have an username that's available to be used.
Ok(get_username_availability::v3::Response { available: true })
}
/// # `POST /_matrix/client/r0/register`
///
/// Register an account on this homeserver.
///
/// You can use [`GET /_matrix/client/r0/register/available`](fn.get_register_available_route.html)
/// to check if the user id is valid and available.
///
/// - Only works if registration is enabled
/// - If type is guest: ignores all parameters except initial_device_display_name
/// - If sender is not appservice: Requires UIAA (but we only use a dummy stage)
/// - If type is not guest and no username is given: Always fails after UIAA check
/// - Creates a new account and populates it with default account data
/// - If `inhibit_login` is false: Creates a device and returns device id and access_token
pub async fn register_route(body: Ruma<register::v3::Request>) -> Result<register::v3::Response> {
if !services().globals.allow_registration().await && body.appservice_info.is_none() {
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Registration has been disabled.",
));
}
let is_guest = body.kind == RegistrationKind::Guest;
let user_id = match (&body.username, is_guest) {
(Some(username), false) => {
let proposed_user_id = UserId::parse_with_server_name(
username.to_lowercase(),
services().globals.server_name(),
)
.ok()
.filter(|user_id| {
!user_id.is_historical()
&& user_id.server_name() == services().globals.server_name()
})
.ok_or(Error::BadRequest(
ErrorKind::InvalidUsername,
"Username is invalid.",
))?;
if services().users.exists(&proposed_user_id)? {
return Err(Error::BadRequest(
ErrorKind::UserInUse,
"Desired user ID is already taken.",
));
}
proposed_user_id
}
_ => loop {
let proposed_user_id = UserId::parse_with_server_name(
utils::random_string(RANDOM_USER_ID_LENGTH).to_lowercase(),
services().globals.server_name(),
)
.unwrap();
if !services().users.exists(&proposed_user_id)? {
break proposed_user_id;
}
},
};
if body.body.login_type == Some(LoginType::ApplicationService) {
if let Some(ref info) = body.appservice_info {
if !info.is_user_match(&user_id) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User is not in namespace.",
));
}
} else {
return Err(Error::BadRequest(
ErrorKind::MissingToken,
"Missing appservice token.",
));
}
} else if services().appservice.is_exclusive_user_id(&user_id).await {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User id reserved by appservice.",
));
}
// UIAA
let mut uiaainfo;
let skip_auth = if services().globals.config.registration_token.is_some() {
// Registration token required
uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::RegistrationToken],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
body.appservice_info.is_some()
} else {
// No registration token necessary, but clients must still go through the flow
uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::Dummy],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
body.appservice_info.is_some() || is_guest
};
if !skip_auth {
if let Some(auth) = &body.auth {
let (worked, uiaainfo) = services().uiaa.try_auth(
&UserId::parse_with_server_name("", services().globals.server_name())
.expect("we know this is valid"),
"".into(),
auth,
&uiaainfo,
)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
services().uiaa.create(
&UserId::parse_with_server_name("", services().globals.server_name())
.expect("we know this is valid"),
"".into(),
&uiaainfo,
&json,
)?;
return Err(Error::Uiaa(uiaainfo));
} else {
return Err(Error::BadRequest(ErrorKind::NotJson, "Not json."));
}
}
let password = if is_guest {
None
} else {
body.password.as_deref()
};
// Create user
services().users.create(&user_id, password)?;
// Default to pretty displayname
let mut displayname = user_id.localpart().to_owned();
// If enabled append lightning bolt to display name (default true)
if services().globals.enable_lightning_bolt() {
displayname.push_str(" ⚡️");
}
services()
.users
.set_displayname(&user_id, Some(displayname.clone()))?;
// Initial account data
services().account_data.update(
None,
&user_id,
GlobalAccountDataEventType::PushRules.to_string().into(),
&serde_json::to_value(ruma::events::push_rules::PushRulesEvent {
content: ruma::events::push_rules::PushRulesEventContent {
global: push::Ruleset::server_default(&user_id),
},
})
.expect("to json always works"),
)?;
// Inhibit login does not work for guests
if !is_guest && body.inhibit_login {
return Ok(register::v3::Response {
access_token: None,
user_id,
device_id: None,
refresh_token: None,
expires_in: None,
});
}
// Generate new device id if the user didn't specify one
let device_id = if is_guest {
None
} else {
body.device_id.clone()
}
.unwrap_or_else(|| utils::random_string(DEVICE_ID_LENGTH).into());
// Generate new token for the device
let token = utils::random_string(TOKEN_LENGTH);
// Create device for this account
services().users.create_device(
&user_id,
&device_id,
&token,
body.initial_device_display_name.clone(),
)?;
info!("New user {} registered on this server.", user_id);
if body.appservice_info.is_none() && !is_guest {
services()
.admin
.send_message(RoomMessageEventContent::notice_plain(format!(
"New user {user_id} registered on this server."
)));
}
// If this is the first real user, grant them admin privileges
// Note: the server user, @conduit:servername, is generated first
if !is_guest {
if let Some(admin_room) = services().admin.get_admin_room()? {
if services()
.rooms
.state_cache
.room_joined_count(&admin_room)?
== Some(1)
{
services()
.admin
.make_user_admin(&user_id, displayname)
.await?;
warn!("Granting {} admin privileges as the first user", user_id);
}
}
}
Ok(register::v3::Response {
access_token: Some(token),
user_id,
device_id: Some(device_id),
refresh_token: None,
expires_in: None,
})
}
/// # `POST /_matrix/client/r0/account/password`
///
/// Changes the password of this account.
///
/// - Requires UIAA to verify user password
/// - Changes the password of the sender user
/// - The password hash is calculated using argon2 with 32 character salt, the plain password is
/// not saved
///
/// If logout_devices is true it does the following for each device except the sender device:
/// - Invalidates access token
/// - Deletes device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets to-device events
/// - Triggers device list updates
pub async fn change_password_route(
body: Ruma<change_password::v3::Request>,
) -> Result<change_password::v3::Response> {
let sender_user = body
.sender_user
.as_ref()
// In the future password changes could be performed with UIA with 3PIDs, but we don't support that currently
.ok_or_else(|| Error::BadRequest(ErrorKind::MissingToken, "Missing access token."))?;
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
let mut uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::Password],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
if let Some(auth) = &body.auth {
let (worked, uiaainfo) =
services()
.uiaa
.try_auth(sender_user, sender_device, auth, &uiaainfo)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
services()
.uiaa
.create(sender_user, sender_device, &uiaainfo, &json)?;
return Err(Error::Uiaa(uiaainfo));
} else {
return Err(Error::BadRequest(ErrorKind::NotJson, "Not json."));
}
services()
.users
.set_password(sender_user, Some(&body.new_password))?;
if body.logout_devices {
// Logout all devices except the current one
for id in services()
.users
.all_device_ids(sender_user)
.filter_map(|id| id.ok())
.filter(|id| id != sender_device)
{
services().users.remove_device(sender_user, &id)?;
}
}
info!("User {} changed their password.", sender_user);
services()
.admin
.send_message(RoomMessageEventContent::notice_plain(format!(
"User {sender_user} changed their password."
)));
Ok(change_password::v3::Response {})
}
/// # `GET _matrix/client/r0/account/whoami`
///
/// Get user_id of the sender user.
///
/// Note: Also works for Application Services
pub async fn whoami_route(body: Ruma<whoami::v3::Request>) -> Result<whoami::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let device_id = body.sender_device.as_ref().cloned();
Ok(whoami::v3::Response {
user_id: sender_user.clone(),
device_id,
is_guest: services().users.is_deactivated(sender_user)? && body.appservice_info.is_none(),
})
}
/// # `POST /_matrix/client/r0/account/deactivate`
///
/// Deactivate sender user account.
///
/// - Leaves all rooms and rejects all invitations
/// - Invalidates all access tokens
/// - Deletes all device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets all to-device events
/// - Triggers device list updates
/// - Removes ability to log in again
pub async fn deactivate_route(
body: Ruma<deactivate::v3::Request>,
) -> Result<deactivate::v3::Response> {
let sender_user = body
.sender_user
.as_ref()
// In the future password changes could be performed with UIA with SSO, but we don't support that currently
.ok_or_else(|| Error::BadRequest(ErrorKind::MissingToken, "Missing access token."))?;
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
let mut uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::Password],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
if let Some(auth) = &body.auth {
let (worked, uiaainfo) =
services()
.uiaa
.try_auth(sender_user, sender_device, auth, &uiaainfo)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
services()
.uiaa
.create(sender_user, sender_device, &uiaainfo, &json)?;
return Err(Error::Uiaa(uiaainfo));
} else {
return Err(Error::BadRequest(ErrorKind::NotJson, "Not json."));
}
// Make the user leave all rooms before deactivation
client_server::leave_all_rooms(sender_user).await?;
// Remove devices and mark account as deactivated
services().users.deactivate_account(sender_user)?;
info!("User {} deactivated their account.", sender_user);
services()
.admin
.send_message(RoomMessageEventContent::notice_plain(format!(
"User {sender_user} deactivated their account."
)));
Ok(deactivate::v3::Response {
id_server_unbind_result: ThirdPartyIdRemovalStatus::NoSupport,
})
}
/// # `GET /_matrix/client/v3/account/3pid`
///
/// Get a list of third party identifiers associated with this account.
///
/// - Currently always returns an empty list
pub async fn third_party_route(
body: Ruma<get_3pids::v3::Request>,
) -> Result<get_3pids::v3::Response> {
let _sender_user = body.sender_user.as_ref().expect("user is authenticated");
Ok(get_3pids::v3::Response::new(Vec::new()))
}
/// # `POST /_matrix/client/v3/account/3pid/email/requestToken`
///
/// "This API should be used to request validation tokens when adding an email address to an account"
///
/// - 403 signals that the homeserver does not allow the third party identifier as a contact option.
pub async fn request_3pid_management_token_via_email_route(
_body: Ruma<request_3pid_management_token_via_email::v3::Request>,
) -> Result<request_3pid_management_token_via_email::v3::Response> {
Err(Error::BadRequest(
ErrorKind::ThreepidDenied,
"Third party identifiers are currently unsupported by this server implementation",
))
}
/// # `POST /_matrix/client/v3/account/3pid/msisdn/requestToken`
///
/// "This API should be used to request validation tokens when adding an phone number to an account"
///
/// - 403 signals that The homeserver does not allow the third party identifier as a contact option.
pub async fn request_3pid_management_token_via_msisdn_route(
_body: Ruma<request_3pid_management_token_via_msisdn::v3::Request>,
) -> Result<request_3pid_management_token_via_msisdn::v3::Response> {
Err(Error::BadRequest(
ErrorKind::ThreepidDenied,
"Third party identifiers are currently unsupported by this server implementation",
))
}
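Both requestToken routes above reject third-party identifiers outright; on the wire this surfaces as a standard Matrix error object, roughly of this shape (a sketch, not a captured response):

use serde_json::{json, Value};

// Sketch only: approximate JSON body of the ThreepidDenied rejection above.
fn example_threepid_denied_error() -> Value {
    json!({
        "errcode": "M_THREEPID_DENIED",
        "error": "Third party identifiers are currently unsupported by this server implementation"
    })
}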


@ -0,0 +1,189 @@
use crate::{services, Error, Result, Ruma};
use rand::seq::SliceRandom;
use ruma::{
api::{
appservice,
client::{
alias::{create_alias, delete_alias, get_alias},
error::ErrorKind,
},
federation,
},
OwnedRoomAliasId,
};
/// # `PUT /_matrix/client/r0/directory/room/{roomAlias}`
///
/// Creates a new room alias on this server.
pub async fn create_alias_route(
body: Ruma<create_alias::v3::Request>,
) -> Result<create_alias::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.room_alias.server_name() != services().globals.server_name() {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Alias is from another server.",
));
}
if let Some(ref info) = body.appservice_info {
if !info.aliases.is_match(body.room_alias.as_str()) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias is not in namespace.",
));
}
} else if services()
.appservice
.is_exclusive_alias(&body.room_alias)
.await
{
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias reserved by appservice.",
));
}
if services()
.rooms
.alias
.resolve_local_alias(&body.room_alias)?
.is_some()
{
return Err(Error::Conflict("Alias already exists."));
}
services()
.rooms
.alias
.set_alias(&body.room_alias, &body.room_id, sender_user)?;
Ok(create_alias::v3::Response::new())
}
/// # `DELETE /_matrix/client/r0/directory/room/{roomAlias}`
///
/// Deletes a room alias from this server.
///
/// - TODO: Update canonical alias event
pub async fn delete_alias_route(
body: Ruma<delete_alias::v3::Request>,
) -> Result<delete_alias::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.room_alias.server_name() != services().globals.server_name() {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Alias is from another server.",
));
}
if let Some(ref info) = body.appservice_info {
if !info.aliases.is_match(body.room_alias.as_str()) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias is not in namespace.",
));
}
} else if services()
.appservice
.is_exclusive_alias(&body.room_alias)
.await
{
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias reserved by appservice.",
));
}
services()
.rooms
.alias
.remove_alias(&body.room_alias, sender_user)?;
// TODO: update alt_aliases?
Ok(delete_alias::v3::Response::new())
}
/// # `GET /_matrix/client/r0/directory/room/{roomAlias}`
///
/// Resolve an alias locally or over federation.
///
/// - TODO: Suggest more servers to join via
pub async fn get_alias_route(
body: Ruma<get_alias::v3::Request>,
) -> Result<get_alias::v3::Response> {
get_alias_helper(body.body.room_alias).await
}
pub(crate) async fn get_alias_helper(
room_alias: OwnedRoomAliasId,
) -> Result<get_alias::v3::Response> {
if room_alias.server_name() != services().globals.server_name() {
let response = services()
.sending
.send_federation_request(
room_alias.server_name(),
federation::query::get_room_information::v1::Request {
room_alias: room_alias.to_owned(),
},
)
.await?;
let mut servers = response.servers;
servers.shuffle(&mut rand::thread_rng());
return Ok(get_alias::v3::Response::new(response.room_id, servers));
}
let mut room_id = None;
match services().rooms.alias.resolve_local_alias(&room_alias)? {
Some(r) => room_id = Some(r),
None => {
for appservice in services().appservice.read().await.values() {
if appservice.aliases.is_match(room_alias.as_str())
&& matches!(
services()
.sending
.send_appservice_request(
appservice.registration.clone(),
appservice::query::query_room_alias::v1::Request {
room_alias: room_alias.clone(),
},
)
.await,
Ok(Some(_opt_result))
)
{
room_id = Some(
services()
.rooms
.alias
.resolve_local_alias(&room_alias)?
.ok_or_else(|| {
Error::bad_config("Appservice lied to us. Room does not exist.")
})?,
);
break;
}
}
}
};
let room_id = match room_id {
Some(room_id) => room_id,
None => {
return Err(Error::BadRequest(
ErrorKind::NotFound,
"Room with alias not found.",
))
}
};
Ok(get_alias::v3::Response::new(
room_id,
vec![services().globals.server_name().to_owned()],
))
}


@ -0,0 +1,362 @@
use crate::{services, Error, Result, Ruma};
use ruma::api::client::{
backup::{
add_backup_keys, add_backup_keys_for_room, add_backup_keys_for_session,
create_backup_version, delete_backup_keys, delete_backup_keys_for_room,
delete_backup_keys_for_session, delete_backup_version, get_backup_info, get_backup_keys,
get_backup_keys_for_room, get_backup_keys_for_session, get_latest_backup_info,
update_backup_version,
},
error::ErrorKind,
};
/// # `POST /_matrix/client/r0/room_keys/version`
///
/// Creates a new backup.
pub async fn create_backup_version_route(
body: Ruma<create_backup_version::v3::Request>,
) -> Result<create_backup_version::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let version = services()
.key_backups
.create_backup(sender_user, &body.algorithm)?;
Ok(create_backup_version::v3::Response { version })
}
/// # `PUT /_matrix/client/r0/room_keys/version/{version}`
///
/// Update information about an existing backup. Only `auth_data` can be modified.
pub async fn update_backup_version_route(
body: Ruma<update_backup_version::v3::Request>,
) -> Result<update_backup_version::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.key_backups
.update_backup(sender_user, &body.version, &body.algorithm)?;
Ok(update_backup_version::v3::Response {})
}
/// # `GET /_matrix/client/r0/room_keys/version`
///
/// Get information about the latest backup version.
pub async fn get_latest_backup_info_route(
body: Ruma<get_latest_backup_info::v3::Request>,
) -> Result<get_latest_backup_info::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let (version, algorithm) = services()
.key_backups
.get_latest_backup(sender_user)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Key backup does not exist.",
))?;
Ok(get_latest_backup_info::v3::Response {
algorithm,
count: (services().key_backups.count_keys(sender_user, &version)? as u32).into(),
etag: services().key_backups.get_etag(sender_user, &version)?,
version,
})
}
/// # `GET /_matrix/client/r0/room_keys/version/{version}`
///
/// Get information about an existing backup.
pub async fn get_backup_info_route(
body: Ruma<get_backup_info::v3::Request>,
) -> Result<get_backup_info::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let algorithm = services()
.key_backups
.get_backup(sender_user, &body.version)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Key backup does not exist.",
))?;
Ok(get_backup_info::v3::Response {
algorithm,
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
version: body.version.to_owned(),
})
}
/// # `DELETE /_matrix/client/r0/room_keys/version/{version}`
///
/// Delete an existing key backup.
///
/// - Deletes both the information about the backup and all key data related to it
pub async fn delete_backup_version_route(
body: Ruma<delete_backup_version::v3::Request>,
) -> Result<delete_backup_version::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.key_backups
.delete_backup(sender_user, &body.version)?;
Ok(delete_backup_version::v3::Response {})
}
/// # `PUT /_matrix/client/r0/room_keys/keys`
///
/// Add the received backup keys to the database.
///
/// - Only manipulating the most recently created version of the backup is allowed
/// - Adds the keys to the backup
/// - Returns the new number of keys in this backup and the etag
pub async fn add_backup_keys_route(
body: Ruma<add_backup_keys::v3::Request>,
) -> Result<add_backup_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if Some(&body.version)
!= services()
.key_backups
.get_latest_backup_version(sender_user)?
.as_ref()
{
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"You may only manipulate the most recently created version of the backup.",
));
}
for (room_id, room) in &body.rooms {
for (session_id, key_data) in &room.sessions {
services().key_backups.add_key(
sender_user,
&body.version,
room_id,
session_id,
key_data,
)?
}
}
Ok(add_backup_keys::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}
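The `rooms` map iterated above follows the key-backup upload schema: room id → `sessions` → per-session key data. A sketch of one uploaded session, where every identifier and base64 string is a placeholder rather than real data:

use serde_json::{json, Value};

// Sketch only: all ids and encrypted fields are placeholders.
fn example_backup_upload_body() -> Value {
    json!({
        "rooms": {
            "!roomid:example.org": {
                "sessions": {
                    "sessionid0": {
                        "first_message_index": 0,
                        "forwarded_count": 0,
                        "is_verified": false,
                        "session_data": {
                            "ciphertext": "base64+ciphertext",
                            "ephemeral": "base64+ephemeral+key",
                            "mac": "base64+mac"
                        }
                    }
                }
            }
        }
    })
}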
/// # `PUT /_matrix/client/r0/room_keys/keys/{roomId}`
///
/// Add the received backup keys to the database.
///
/// - Only manipulating the most recently created version of the backup is allowed
/// - Adds the keys to the backup
/// - Returns the new number of keys in this backup and the etag
pub async fn add_backup_keys_for_room_route(
body: Ruma<add_backup_keys_for_room::v3::Request>,
) -> Result<add_backup_keys_for_room::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if Some(&body.version)
!= services()
.key_backups
.get_latest_backup_version(sender_user)?
.as_ref()
{
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"You may only manipulate the most recently created version of the backup.",
));
}
for (session_id, key_data) in &body.sessions {
services().key_backups.add_key(
sender_user,
&body.version,
&body.room_id,
session_id,
key_data,
)?
}
Ok(add_backup_keys_for_room::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}
/// # `PUT /_matrix/client/r0/room_keys/keys/{roomId}/{sessionId}`
///
/// Add the received backup key to the database.
///
/// - Only manipulating the most recently created version of the backup is allowed
/// - Adds the keys to the backup
/// - Returns the new number of keys in this backup and the etag
pub async fn add_backup_keys_for_session_route(
body: Ruma<add_backup_keys_for_session::v3::Request>,
) -> Result<add_backup_keys_for_session::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if Some(&body.version)
!= services()
.key_backups
.get_latest_backup_version(sender_user)?
.as_ref()
{
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"You may only manipulate the most recently created version of the backup.",
));
}
services().key_backups.add_key(
sender_user,
&body.version,
&body.room_id,
&body.session_id,
&body.session_data,
)?;
Ok(add_backup_keys_for_session::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}
/// # `GET /_matrix/client/r0/room_keys/keys`
///
/// Retrieves all keys from the backup.
pub async fn get_backup_keys_route(
body: Ruma<get_backup_keys::v3::Request>,
) -> Result<get_backup_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let rooms = services().key_backups.get_all(sender_user, &body.version)?;
Ok(get_backup_keys::v3::Response { rooms })
}
/// # `GET /_matrix/client/r0/room_keys/keys/{roomId}`
///
/// Retrieves all keys from the backup for a given room.
pub async fn get_backup_keys_for_room_route(
body: Ruma<get_backup_keys_for_room::v3::Request>,
) -> Result<get_backup_keys_for_room::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sessions = services()
.key_backups
.get_room(sender_user, &body.version, &body.room_id)?;
Ok(get_backup_keys_for_room::v3::Response { sessions })
}
/// # `GET /_matrix/client/r0/room_keys/keys/{roomId}/{sessionId}`
///
/// Retrieves a key from the backup.
pub async fn get_backup_keys_for_session_route(
body: Ruma<get_backup_keys_for_session::v3::Request>,
) -> Result<get_backup_keys_for_session::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let key_data = services()
.key_backups
.get_session(sender_user, &body.version, &body.room_id, &body.session_id)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Backup key not found for this user's session.",
))?;
Ok(get_backup_keys_for_session::v3::Response { key_data })
}
/// # `DELETE /_matrix/client/r0/room_keys/keys`
///
/// Delete the keys from the backup.
pub async fn delete_backup_keys_route(
body: Ruma<delete_backup_keys::v3::Request>,
) -> Result<delete_backup_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.key_backups
.delete_all_keys(sender_user, &body.version)?;
Ok(delete_backup_keys::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}
/// # `DELETE /_matrix/client/r0/room_keys/keys/{roomId}`
///
/// Delete the keys from the backup for a given room.
pub async fn delete_backup_keys_for_room_route(
body: Ruma<delete_backup_keys_for_room::v3::Request>,
) -> Result<delete_backup_keys_for_room::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.key_backups
.delete_room_keys(sender_user, &body.version, &body.room_id)?;
Ok(delete_backup_keys_for_room::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}
/// # `DELETE /_matrix/client/r0/room_keys/keys/{roomId}/{sessionId}`
///
/// Delete a key from the backup.
pub async fn delete_backup_keys_for_session_route(
body: Ruma<delete_backup_keys_for_session::v3::Request>,
) -> Result<delete_backup_keys_for_session::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services().key_backups.delete_room_key(
sender_user,
&body.version,
&body.room_id,
&body.session_id,
)?;
Ok(delete_backup_keys_for_session::v3::Response {
count: (services()
.key_backups
.count_keys(sender_user, &body.version)? as u32)
.into(),
etag: services()
.key_backups
.get_etag(sender_user, &body.version)?,
})
}


@ -0,0 +1,28 @@
use crate::{services, Result, Ruma};
use ruma::api::client::discovery::get_capabilities::{
self, Capabilities, RoomVersionStability, RoomVersionsCapability,
};
use std::collections::BTreeMap;
/// # `GET /_matrix/client/r0/capabilities`
///
/// Get information on the supported feature set and other relevant capabilities of this server.
pub async fn get_capabilities_route(
_body: Ruma<get_capabilities::v3::Request>,
) -> Result<get_capabilities::v3::Response> {
let mut available = BTreeMap::new();
for room_version in &services().globals.unstable_room_versions {
available.insert(room_version.clone(), RoomVersionStability::Unstable);
}
for room_version in &services().globals.stable_room_versions {
available.insert(room_version.clone(), RoomVersionStability::Stable);
}
let mut capabilities = Capabilities::new();
capabilities.room_versions = RoomVersionsCapability {
default: services().globals.default_room_version(),
available,
};
Ok(get_capabilities::v3::Response { capabilities })
}
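Serialized, the capability assembled above becomes the `m.room_versions` object below; the concrete version numbers are illustrative only, since the real map comes from the configured stable and unstable room versions:

use serde_json::{json, Value};

// Sketch only: version numbers are made up for illustration.
fn example_room_versions_capability() -> Value {
    json!({
        "capabilities": {
            "m.room_versions": {
                "default": "10",
                "available": {
                    "9": "stable",
                    "10": "stable",
                    "11": "unstable"
                }
            }
        }
    })
}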


@ -1,11 +1,11 @@
use crate::{database::DatabaseGuard, ConduitResult, Error, Ruma};
use crate::{services, Error, Result, Ruma};
use ruma::{
api::client::{
error::ErrorKind,
r0::config::{
config::{
get_global_account_data, get_room_account_data, set_global_account_data,
set_room_account_data,
},
error::ErrorKind,
},
events::{AnyGlobalAccountDataEventContent, AnyRoomAccountDataEventContent},
serde::Raw,
@ -13,29 +13,20 @@ use ruma::{
use serde::Deserialize;
use serde_json::{json, value::RawValue as RawJsonValue};
#[cfg(feature = "conduit_bin")]
use rocket::{get, put};
/// # `PUT /_matrix/client/r0/user/{userId}/account_data/{type}`
///
/// Sets some account data for the sender user.
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/user/<_>/account_data/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_global_account_data_route(
db: DatabaseGuard,
body: Ruma<set_global_account_data::Request<'_>>,
) -> ConduitResult<set_global_account_data::Response> {
body: Ruma<set_global_account_data::v3::Request>,
) -> Result<set_global_account_data::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let data: serde_json::Value = serde_json::from_str(body.data.get())
let data: serde_json::Value = serde_json::from_str(body.data.json().get())
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Data is invalid."))?;
let event_type = body.event_type.to_string();
db.account_data.update(
services().account_data.update(
None,
sender_user,
event_type.clone().into(),
@ -43,37 +34,25 @@ pub async fn set_global_account_data_route(
"type": event_type,
"content": data,
}),
&db.globals,
)?;
db.flush()?;
Ok(set_global_account_data::Response {}.into())
Ok(set_global_account_data::v3::Response {})
}
/// # `PUT /_matrix/client/r0/user/{userId}/rooms/{roomId}/account_data/{type}`
///
/// Sets some room account data for the sender user.
#[cfg_attr(
feature = "conduit_bin",
put(
"/_matrix/client/r0/user/<_>/rooms/<_>/account_data/<_>",
data = "<body>"
)
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_room_account_data_route(
db: DatabaseGuard,
body: Ruma<set_room_account_data::Request<'_>>,
) -> ConduitResult<set_room_account_data::Response> {
body: Ruma<set_room_account_data::v3::Request>,
) -> Result<set_room_account_data::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let data: serde_json::Value = serde_json::from_str(body.data.get())
let data: serde_json::Value = serde_json::from_str(body.data.json().get())
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Data is invalid."))?;
let event_type = body.event_type.to_string();
db.account_data.update(
services().account_data.update(
Some(&body.room_id),
sender_user,
event_type.clone().into(),
@ -81,71 +60,49 @@ pub async fn set_room_account_data_route(
"type": event_type,
"content": data,
}),
&db.globals,
)?;
db.flush()?;
Ok(set_room_account_data::Response {}.into())
Ok(set_room_account_data::v3::Response {})
}
/// # `GET /_matrix/client/r0/user/{userId}/account_data/{type}`
///
/// Gets some account data for the sender user.
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/user/<_>/account_data/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_global_account_data_route(
db: DatabaseGuard,
body: Ruma<get_global_account_data::Request<'_>>,
) -> ConduitResult<get_global_account_data::Response> {
body: Ruma<get_global_account_data::v3::Request>,
) -> Result<get_global_account_data::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event: Box<RawJsonValue> = db
let event: Box<RawJsonValue> = services()
.account_data
.get(None, sender_user, body.event_type.clone().into())?
.get(None, sender_user, body.event_type.to_string().into())?
.ok_or(Error::BadRequest(ErrorKind::NotFound, "Data not found."))?;
let account_data = serde_json::from_str::<ExtractGlobalEventContent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?
.content;
Ok(get_global_account_data::Response { account_data }.into())
Ok(get_global_account_data::v3::Response { account_data })
}
/// # `GET /_matrix/client/r0/user/{userId}/rooms/{roomId}/account_data/{type}`
///
/// Gets some room account data for the sender user.
#[cfg_attr(
feature = "conduit_bin",
get(
"/_matrix/client/r0/user/<_>/rooms/<_>/account_data/<_>",
data = "<body>"
)
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_room_account_data_route(
db: DatabaseGuard,
body: Ruma<get_room_account_data::Request<'_>>,
) -> ConduitResult<get_room_account_data::Response> {
body: Ruma<get_room_account_data::v3::Request>,
) -> Result<get_room_account_data::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event: Box<RawJsonValue> = db
let event: Box<RawJsonValue> = services()
.account_data
.get(
Some(&body.room_id),
sender_user,
body.event_type.clone().into(),
)?
.get(Some(&body.room_id), sender_user, body.event_type.clone())?
.ok_or(Error::BadRequest(ErrorKind::NotFound, "Data not found."))?;
let account_data = serde_json::from_str::<ExtractRoomEventContent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?
.content;
Ok(get_room_account_data::Response { account_data }.into())
Ok(get_room_account_data::v3::Response { account_data })
}
#[derive(Deserialize)]


@ -0,0 +1,209 @@
use crate::{services, Error, Result, Ruma};
use ruma::{
api::client::{context::get_context, error::ErrorKind, filter::LazyLoadOptions},
events::StateEventType,
};
use std::collections::HashSet;
use tracing::error;
/// # `GET /_matrix/client/r0/rooms/{roomId}/context`
///
/// Allows loading room history around an event.
///
/// - Only works if the user is joined (TODO: always allow, but only show events if the user was
/// joined, depending on history_visibility)
pub async fn get_context_route(
body: Ruma<get_context::v3::Request>,
) -> Result<get_context::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
let (lazy_load_enabled, lazy_load_send_redundant) = match &body.filter.lazy_load_options {
LazyLoadOptions::Enabled {
include_redundant_members,
} => (true, *include_redundant_members),
_ => (false, false),
};
let mut lazy_loaded = HashSet::new();
let base_token = services()
.rooms
.timeline
.get_pdu_count(&body.event_id)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Base event id not found.",
))?;
let base_event =
services()
.rooms
.timeline
.get_pdu(&body.event_id)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Base event not found.",
))?;
let room_id = base_event.room_id.clone();
if !services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &room_id, &body.event_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view this event.",
));
}
if !services().rooms.lazy_loading.lazy_load_was_sent_before(
sender_user,
sender_device,
&room_id,
&base_event.sender,
)? || lazy_load_send_redundant
{
lazy_loaded.insert(base_event.sender.as_str().to_owned());
}
// Clamp the limit to a maximum of 100
let limit = u64::from(body.limit).min(100) as usize;
let base_event = base_event.to_room_event();
let events_before: Vec<_> = services()
.rooms
.timeline
.pdus_until(sender_user, &room_id, base_token)?
.take(limit / 2)
.filter_map(|r| r.ok()) // Remove buggy events
.filter(|(_, pdu)| {
services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &room_id, &pdu.event_id)
.unwrap_or(false)
})
.collect();
for (_, event) in &events_before {
if !services().rooms.lazy_loading.lazy_load_was_sent_before(
sender_user,
sender_device,
&room_id,
&event.sender,
)? || lazy_load_send_redundant
{
lazy_loaded.insert(event.sender.as_str().to_owned());
}
}
let start_token = events_before
.last()
.map(|(count, _)| count.stringify())
.unwrap_or_else(|| base_token.stringify());
let events_before: Vec<_> = events_before
.into_iter()
.map(|(_, pdu)| pdu.to_room_event())
.collect();
let events_after: Vec<_> = services()
.rooms
.timeline
.pdus_after(sender_user, &room_id, base_token)?
.take(limit / 2)
.filter_map(|r| r.ok()) // Remove buggy events
.filter(|(_, pdu)| {
services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &room_id, &pdu.event_id)
.unwrap_or(false)
})
.collect();
for (_, event) in &events_after {
if !services().rooms.lazy_loading.lazy_load_was_sent_before(
sender_user,
sender_device,
&room_id,
&event.sender,
)? || lazy_load_send_redundant
{
lazy_loaded.insert(event.sender.as_str().to_owned());
}
}
let shortstatehash = match services().rooms.state_accessor.pdu_shortstatehash(
events_after
.last()
.map_or(&*body.event_id, |(_, e)| &*e.event_id),
)? {
Some(s) => s,
None => services()
.rooms
.state
.get_room_shortstatehash(&room_id)?
.expect("All rooms have state"),
};
let state_ids = services()
.rooms
.state_accessor
.state_full_ids(shortstatehash)
.await?;
let end_token = events_after
.last()
.map(|(count, _)| count.stringify())
.unwrap_or_else(|| base_token.stringify());
let events_after: Vec<_> = events_after
.into_iter()
.map(|(_, pdu)| pdu.to_room_event())
.collect();
let mut state = Vec::new();
for (shortstatekey, id) in state_ids {
let (event_type, state_key) = services()
.rooms
.short
.get_statekey_from_short(shortstatekey)?;
if event_type != StateEventType::RoomMember {
let pdu = match services().rooms.timeline.get_pdu(&id)? {
Some(pdu) => pdu,
None => {
error!("Pdu in state not found: {}", id);
continue;
}
};
state.push(pdu.to_state_event());
} else if !lazy_load_enabled || lazy_loaded.contains(&state_key) {
let pdu = match services().rooms.timeline.get_pdu(&id)? {
Some(pdu) => pdu,
None => {
error!("Pdu in state not found: {}", id);
continue;
}
};
state.push(pdu.to_state_event());
}
}
let resp = get_context::v3::Response {
start: Some(start_token),
end: Some(end_token),
events_before,
event: Some(base_event),
events_after,
state,
};
Ok(resp)
}


@ -1,91 +1,68 @@
use crate::{database::DatabaseGuard, utils, ConduitResult, Error, Ruma};
use crate::{services, utils, Error, Result, Ruma};
use ruma::api::client::{
device::{self, delete_device, delete_devices, get_device, get_devices, update_device},
error::ErrorKind,
r0::{
device::{self, delete_device, delete_devices, get_device, get_devices, update_device},
uiaa::{AuthFlow, AuthType, UiaaInfo},
},
uiaa::{AuthFlow, AuthType, UiaaInfo},
};
use super::SESSION_ID_LENGTH;
#[cfg(feature = "conduit_bin")]
use rocket::{delete, get, post, put};
/// # `GET /_matrix/client/r0/devices`
///
/// Get metadata on all devices of the sender user.
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/devices", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_devices_route(
db: DatabaseGuard,
body: Ruma<get_devices::Request>,
) -> ConduitResult<get_devices::Response> {
body: Ruma<get_devices::v3::Request>,
) -> Result<get_devices::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let devices: Vec<device::Device> = db
let devices: Vec<device::Device> = services()
.users
.all_devices_metadata(sender_user)
.filter_map(|r| r.ok()) // Filter out buggy devices
.collect();
Ok(get_devices::Response { devices }.into())
Ok(get_devices::v3::Response { devices })
}
/// # `GET /_matrix/client/r0/devices/{deviceId}`
///
/// Get metadata on a single device of the sender user.
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/devices/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_device_route(
db: DatabaseGuard,
body: Ruma<get_device::Request<'_>>,
) -> ConduitResult<get_device::Response> {
body: Ruma<get_device::v3::Request>,
) -> Result<get_device::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let device = db
let device = services()
.users
.get_device_metadata(sender_user, &body.body.device_id)?
.ok_or(Error::BadRequest(ErrorKind::NotFound, "Device not found."))?;
Ok(get_device::Response { device }.into())
Ok(get_device::v3::Response { device })
}
/// # `PUT /_matrix/client/r0/devices/{deviceId}`
///
/// Updates the metadata on a given device of the sender user.
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/devices/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn update_device_route(
db: DatabaseGuard,
body: Ruma<update_device::Request<'_>>,
) -> ConduitResult<update_device::Response> {
body: Ruma<update_device::v3::Request>,
) -> Result<update_device::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let mut device = db
let mut device = services()
.users
.get_device_metadata(sender_user, &body.device_id)?
.ok_or(Error::BadRequest(ErrorKind::NotFound, "Device not found."))?;
device.display_name = body.display_name.clone();
device.display_name.clone_from(&body.display_name);
db.users
services()
.users
.update_device_metadata(sender_user, &body.device_id, &device)?;
db.flush()?;
Ok(update_device::Response {}.into())
Ok(update_device::v3::Response {})
}
/// # `PUT /_matrix/client/r0/devices/{deviceId}`
/// # `DELETE /_matrix/client/r0/devices/{deviceId}`
///
/// Deletes the given device.
///
@ -94,15 +71,9 @@ pub async fn update_device_route(
/// - Deletes device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets to-device events
/// - Triggers device list updates
#[cfg_attr(
feature = "conduit_bin",
delete("/_matrix/client/r0/devices/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn delete_device_route(
db: DatabaseGuard,
body: Ruma<delete_device::Request<'_>>,
) -> ConduitResult<delete_device::Response> {
body: Ruma<delete_device::v3::Request>,
) -> Result<delete_device::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
@ -118,32 +89,29 @@ pub async fn delete_device_route(
};
if let Some(auth) = &body.auth {
let (worked, uiaainfo) = db.uiaa.try_auth(
sender_user,
sender_device,
auth,
&uiaainfo,
&db.users,
&db.globals,
)?;
let (worked, uiaainfo) =
services()
.uiaa
.try_auth(sender_user, sender_device, auth, &uiaainfo)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
db.uiaa
services()
.uiaa
.create(sender_user, sender_device, &uiaainfo, &json)?;
return Err(Error::Uiaa(uiaainfo));
} else {
return Err(Error::BadRequest(ErrorKind::NotJson, "Not json."));
}
db.users.remove_device(sender_user, &body.device_id)?;
services()
.users
.remove_device(sender_user, &body.device_id)?;
db.flush()?;
Ok(delete_device::Response {}.into())
Ok(delete_device::v3::Response {})
}
/// # `POST /_matrix/client/r0/delete_devices`
@ -157,15 +125,9 @@ pub async fn delete_device_route(
/// - Deletes device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets to-device events
/// - Triggers device list updates
#[cfg_attr(
feature = "conduit_bin",
post("/_matrix/client/r0/delete_devices", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn delete_devices_route(
db: DatabaseGuard,
body: Ruma<delete_devices::Request<'_>>,
) -> ConduitResult<delete_devices::Response> {
body: Ruma<delete_devices::v3::Request>,
) -> Result<delete_devices::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
@ -181,21 +143,18 @@ pub async fn delete_devices_route(
};
if let Some(auth) = &body.auth {
let (worked, uiaainfo) = db.uiaa.try_auth(
sender_user,
sender_device,
auth,
&uiaainfo,
&db.users,
&db.globals,
)?;
let (worked, uiaainfo) =
services()
.uiaa
.try_auth(sender_user, sender_device, auth, &uiaainfo)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
db.uiaa
services()
.uiaa
.create(sender_user, sender_device, &uiaainfo, &json)?;
return Err(Error::Uiaa(uiaainfo));
} else {
@ -203,10 +162,8 @@ pub async fn delete_devices_route(
}
for device_id in &body.devices {
db.users.remove_device(sender_user, device_id)?
services().users.remove_device(sender_user, device_id)?
}
db.flush()?;
Ok(delete_devices::Response {}.into())
Ok(delete_devices::v3::Response {})
}


@ -1,55 +1,42 @@
use std::convert::TryInto;
use crate::{database::DatabaseGuard, ConduitResult, Database, Error, Result, Ruma};
use crate::{services, Error, Result, Ruma};
use ruma::{
api::{
client::{
error::ErrorKind,
r0::{
directory::{
get_public_rooms, get_public_rooms_filtered, get_room_visibility,
set_room_visibility,
},
room,
directory::{
get_public_rooms, get_public_rooms_filtered, get_room_visibility,
set_room_visibility,
},
error::ErrorKind,
room,
},
federation,
},
directory::{Filter, IncomingFilter, IncomingRoomNetwork, PublicRoomsChunk, RoomNetwork},
directory::{Filter, PublicRoomJoinRule, PublicRoomsChunk, RoomNetwork},
events::{
room::{
avatar::RoomAvatarEventContent,
canonical_alias::RoomCanonicalAliasEventContent,
create::RoomCreateEventContent,
guest_access::{GuestAccess, RoomGuestAccessEventContent},
history_visibility::{HistoryVisibility, RoomHistoryVisibilityEventContent},
name::RoomNameEventContent,
join_rules::{JoinRule, RoomJoinRulesEventContent},
topic::RoomTopicEventContent,
},
EventType,
StateEventType,
},
ServerName, UInt,
};
use tracing::{info, warn};
#[cfg(feature = "conduit_bin")]
use rocket::{get, post, put};
use tracing::{error, info, warn};
/// # `POST /_matrix/client/r0/publicRooms`
///
/// Lists the public rooms on this server.
///
/// - Rooms are ordered by the number of joined members
#[cfg_attr(
feature = "conduit_bin",
post("/_matrix/client/r0/publicRooms", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_public_rooms_filtered_route(
db: DatabaseGuard,
body: Ruma<get_public_rooms_filtered::Request<'_>>,
) -> ConduitResult<get_public_rooms_filtered::Response> {
body: Ruma<get_public_rooms_filtered::v3::Request>,
) -> Result<get_public_rooms_filtered::v3::Response> {
get_public_rooms_filtered_helper(
&db,
body.server.as_deref(),
body.limit,
body.since.as_deref(),
@ -64,33 +51,24 @@ pub async fn get_public_rooms_filtered_route(
/// Lists the public rooms on this server.
///
/// - Rooms are ordered by the number of joined members
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/publicRooms", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_public_rooms_route(
db: DatabaseGuard,
body: Ruma<get_public_rooms::Request<'_>>,
) -> ConduitResult<get_public_rooms::Response> {
body: Ruma<get_public_rooms::v3::Request>,
) -> Result<get_public_rooms::v3::Response> {
let response = get_public_rooms_filtered_helper(
&db,
body.server.as_deref(),
body.limit,
body.since.as_deref(),
&IncomingFilter::default(),
&IncomingRoomNetwork::Matrix,
&Filter::default(),
&RoomNetwork::Matrix,
)
.await?
.0;
.await?;
Ok(get_public_rooms::Response {
Ok(get_public_rooms::v3::Response {
chunk: response.chunk,
prev_batch: response.prev_batch,
next_batch: response.next_batch,
total_room_count_estimate: response.total_room_count_estimate,
}
.into())
})
}
/// # `PUT /_matrix/client/r0/directory/list/room/{roomId}`
@ -98,23 +76,22 @@ pub async fn get_public_rooms_route(
/// Sets the visibility of a given room in the room directory.
///
/// - TODO: Access control checks
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/directory/list/room/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_room_visibility_route(
db: DatabaseGuard,
body: Ruma<set_room_visibility::Request<'_>>,
) -> ConduitResult<set_room_visibility::Response> {
body: Ruma<set_room_visibility::v3::Request>,
) -> Result<set_room_visibility::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services().rooms.metadata.exists(&body.room_id)? {
// Return 404 if the room doesn't exist
return Err(Error::BadRequest(ErrorKind::NotFound, "Room not found"));
}
match &body.visibility {
room::Visibility::Public => {
db.rooms.set_public(&body.room_id, true)?;
services().rooms.directory.set_public(&body.room_id)?;
info!("{} made {} public", sender_user, body.room_id);
}
room::Visibility::Private => db.rooms.set_public(&body.room_id, false)?,
room::Visibility::Private => services().rooms.directory.set_not_public(&body.room_id)?,
_ => {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
@ -123,78 +100,61 @@ pub async fn set_room_visibility_route(
}
}
db.flush()?;
Ok(set_room_visibility::Response {}.into())
Ok(set_room_visibility::v3::Response {})
}
/// # `GET /_matrix/client/r0/directory/list/room/{roomId}`
///
/// Gets the visibility of a given room in the room directory.
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/directory/list/room/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_room_visibility_route(
db: DatabaseGuard,
body: Ruma<get_room_visibility::Request<'_>>,
) -> ConduitResult<get_room_visibility::Response> {
Ok(get_room_visibility::Response {
visibility: if db.rooms.is_public_room(&body.room_id)? {
body: Ruma<get_room_visibility::v3::Request>,
) -> Result<get_room_visibility::v3::Response> {
if !services().rooms.metadata.exists(&body.room_id)? {
// Return 404 if the room doesn't exist
return Err(Error::BadRequest(ErrorKind::NotFound, "Room not found"));
}
Ok(get_room_visibility::v3::Response {
visibility: if services().rooms.directory.is_public_room(&body.room_id)? {
room::Visibility::Public
} else {
room::Visibility::Private
},
}
.into())
})
}
pub(crate) async fn get_public_rooms_filtered_helper(
db: &Database,
server: Option<&ServerName>,
limit: Option<UInt>,
since: Option<&str>,
filter: &IncomingFilter,
_network: &IncomingRoomNetwork,
) -> ConduitResult<get_public_rooms_filtered::Response> {
if let Some(other_server) = server.filter(|server| *server != db.globals.server_name().as_str())
filter: &Filter,
_network: &RoomNetwork,
) -> Result<get_public_rooms_filtered::v3::Response> {
if let Some(other_server) =
server.filter(|server| *server != services().globals.server_name().as_str())
{
let response = db
let response = services()
.sending
.send_federation_request(
&db.globals,
other_server,
federation::directory::get_public_rooms_filtered::v1::Request {
limit,
since: since.as_deref(),
since: since.map(ToOwned::to_owned),
filter: Filter {
generic_search_term: filter.generic_search_term.as_deref(),
generic_search_term: filter.generic_search_term.clone(),
room_types: filter.room_types.clone(),
},
room_network: RoomNetwork::Matrix,
},
)
.await?;
return Ok(get_public_rooms_filtered::Response {
chunk: response
.chunk
.into_iter()
.map(|c| {
// Convert ruma::api::federation::directory::get_public_rooms::v1::PublicRoomsChunk
// to ruma::api::client::r0::directory::PublicRoomsChunk
serde_json::from_str(
&serde_json::to_string(&c)
.expect("PublicRoomsChunk::to_string always works"),
)
.expect("federation and client-server PublicRoomsChunk are the same type")
})
.collect(),
return Ok(get_public_rooms_filtered::v3::Response {
chunk: response.chunk,
prev_batch: response.prev_batch,
next_batch: response.next_batch,
total_room_count_estimate: response.total_room_count_estimate,
}
.into());
});
}
let limit = limit.map_or(10, u64::from);
@ -223,17 +183,18 @@ pub(crate) async fn get_public_rooms_filtered_helper(
}
}
let mut all_rooms: Vec<_> = db
let mut all_rooms: Vec<_> = services()
.rooms
.directory
.public_rooms()
.map(|room_id| {
let room_id = room_id?;
let chunk = PublicRoomsChunk {
aliases: Vec::new(),
canonical_alias: db
canonical_alias: services()
.rooms
.room_state_get(&room_id, &EventType::RoomCanonicalAlias, "")?
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomCanonicalAlias, "")?
.map_or(Ok(None), |s| {
serde_json::from_str(s.content.get())
.map(|c: RoomCanonicalAliasEventContent| c.alias)
@ -241,18 +202,10 @@ pub(crate) async fn get_public_rooms_filtered_helper(
Error::bad_database("Invalid canonical alias event in database.")
})
})?,
name: db
.rooms
.room_state_get(&room_id, &EventType::RoomName, "")?
.map_or(Ok(None), |s| {
serde_json::from_str(s.content.get())
.map(|c: RoomNameEventContent| c.name)
.map_err(|_| {
Error::bad_database("Invalid room name event in database.")
})
})?,
num_joined_members: db
name: services().rooms.state_accessor.get_name(&room_id)?,
num_joined_members: services()
.rooms
.state_cache
.room_joined_count(&room_id)?
.unwrap_or_else(|| {
warn!("Room {} has no member count", room_id);
@ -260,19 +213,22 @@ pub(crate) async fn get_public_rooms_filtered_helper(
})
.try_into()
.expect("user count should not be that big"),
topic: db
topic: services()
.rooms
.room_state_get(&room_id, &EventType::RoomTopic, "")?
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomTopic, "")?
.map_or(Ok(None), |s| {
serde_json::from_str(s.content.get())
.map(|c: RoomTopicEventContent| Some(c.topic))
.map_err(|_| {
error!("Invalid room topic event in database for room {}", room_id);
Error::bad_database("Invalid room topic event in database.")
})
})?,
world_readable: db
world_readable: services()
.rooms
.room_state_get(&room_id, &EventType::RoomHistoryVisibility, "")?
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomHistoryVisibility, "")?
.map_or(Ok(false), |s| {
serde_json::from_str(s.content.get())
.map(|c: RoomHistoryVisibilityEventContent| {
@ -284,9 +240,10 @@ pub(crate) async fn get_public_rooms_filtered_helper(
)
})
})?,
guest_can_join: db
guest_can_join: services()
.rooms
.room_state_get(&room_id, &EventType::RoomGuestAccess, "")?
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomGuestAccess, "")?
.map_or(Ok(false), |s| {
serde_json::from_str(s.content.get())
.map(|c: RoomGuestAccessEventContent| {
@ -296,9 +253,10 @@ pub(crate) async fn get_public_rooms_filtered_helper(
Error::bad_database("Invalid room guest access event in database.")
})
})?,
avatar_url: db
avatar_url: services()
.rooms
.room_state_get(&room_id, &EventType::RoomAvatar, "")?
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomAvatar, "")?
.map(|s| {
serde_json::from_str(s.content.get())
.map(|c: RoomAvatarEventContent| c.url)
@ -309,6 +267,39 @@ pub(crate) async fn get_public_rooms_filtered_helper(
.transpose()?
// url is now an Option<String> so we must flatten
.flatten(),
join_rule: services()
.rooms
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomJoinRules, "")?
.map(|s| {
serde_json::from_str(s.content.get())
.map(|c: RoomJoinRulesEventContent| match c.join_rule {
JoinRule::Public => Some(PublicRoomJoinRule::Public),
JoinRule::Knock => Some(PublicRoomJoinRule::Knock),
_ => None,
})
.map_err(|e| {
error!("Invalid room join rule event in database: {}", e);
Error::BadDatabase("Invalid room join rule event in database.")
})
})
.transpose()?
.flatten()
.ok_or_else(|| Error::bad_database("Missing room join rule event for room."))?,
room_type: services()
.rooms
.state_accessor
.room_state_get(&room_id, &StateEventType::RoomCreate, "")?
.map(|s| {
serde_json::from_str::<RoomCreateEventContent>(s.content.get()).map_err(
|e| {
error!("Invalid room create event in database: {}", e);
Error::BadDatabase("Invalid room create event in database.")
},
)
})
.transpose()?
.and_then(|e| e.room_type),
room_id,
};
Ok(chunk)
@ -360,7 +351,7 @@ pub(crate) async fn get_public_rooms_filtered_helper(
let prev_batch = if num_since == 0 {
None
} else {
Some(format!("p{}", num_since))
Some(format!("p{num_since}"))
};
let next_batch = if chunk.len() < limit as usize {
@ -369,11 +360,10 @@ pub(crate) async fn get_public_rooms_filtered_helper(
Some(format!("n{}", num_since + limit))
};
Ok(get_public_rooms_filtered::Response {
Ok(get_public_rooms_filtered::v3::Response {
chunk,
prev_batch,
next_batch,
total_room_count_estimate: Some(total_room_count_estimate),
}
.into())
})
}
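The `prev_batch`/`next_batch` tokens built above are plain `p<offset>` and `n<offset>` strings. A hypothetical helper (not one from this file) to decode such a token back into its numeric offset:

// Sketch only: "p10" and "n20" both decode to their numeric offset.
fn parse_batch_token(token: &str) -> Option<u64> {
    token
        .strip_prefix('p')
        .or_else(|| token.strip_prefix('n'))
        .and_then(|offset| offset.parse().ok())
}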


@ -0,0 +1,34 @@
use crate::{services, Error, Result, Ruma};
use ruma::api::client::{
error::ErrorKind,
filter::{create_filter, get_filter},
};
/// # `GET /_matrix/client/r0/user/{userId}/filter/{filterId}`
///
/// Loads a filter that was previously created.
///
/// - A user can only access their own filters
pub async fn get_filter_route(
body: Ruma<get_filter::v3::Request>,
) -> Result<get_filter::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let filter = match services().users.get_filter(sender_user, &body.filter_id)? {
Some(filter) => filter,
None => return Err(Error::BadRequest(ErrorKind::NotFound, "Filter not found.")),
};
Ok(get_filter::v3::Response::new(filter))
}
/// # `PUT /_matrix/client/r0/user/{userId}/filter`
///
/// Creates a new filter to be used by other endpoints.
pub async fn create_filter_route(
body: Ruma<create_filter::v3::Request>,
) -> Result<create_filter::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
Ok(create_filter::v3::Response::new(
services().users.create_filter(sender_user, &body.filter)?,
))
}
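Filters are stored and returned verbatim, so the body is whatever the client defines. As an illustration (values are examples, not defaults), a filter enabling the lazy member loading consulted by the sync and context handlers might look like this:

use serde_json::{json, Value};

// Sketch only: an example filter body a client could create.
fn example_lazy_load_filter() -> Value {
    json!({
        "room": {
            "state": { "lazy_load_members": true },
            "timeline": { "limit": 10, "lazy_load_members": true }
        }
    })
}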


@ -0,0 +1,536 @@
use super::SESSION_ID_LENGTH;
use crate::{services, utils, Error, Result, Ruma};
use futures_util::{stream::FuturesUnordered, StreamExt};
use ruma::{
api::{
client::{
error::ErrorKind,
keys::{
claim_keys, get_key_changes, get_keys, upload_keys, upload_signatures,
upload_signing_keys,
},
uiaa::{AuthFlow, AuthType, UiaaInfo},
},
federation,
},
serde::Raw,
DeviceKeyAlgorithm, OwnedDeviceId, OwnedUserId, UserId,
};
use serde_json::json;
use std::{
collections::{hash_map, BTreeMap, HashMap, HashSet},
time::{Duration, Instant},
};
use tracing::debug;
/// # `POST /_matrix/client/r0/keys/upload`
///
/// Publish end-to-end encryption keys for the sender device.
///
/// - Adds one time keys
/// - If there are no device keys yet: Adds device keys (TODO: merge with existing keys?)
pub async fn upload_keys_route(
body: Ruma<upload_keys::v3::Request>,
) -> Result<upload_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
for (key_key, key_value) in &body.one_time_keys {
services()
.users
.add_one_time_key(sender_user, sender_device, key_key, key_value)?;
}
if let Some(device_keys) = &body.device_keys {
// TODO: merge this and the existing event?
// This check is needed to ensure that signatures are kept
if services()
.users
.get_device_keys(sender_user, sender_device)?
.is_none()
{
services()
.users
.add_device_keys(sender_user, sender_device, device_keys)?;
}
}
Ok(upload_keys::v3::Response {
one_time_key_counts: services()
.users
.count_one_time_keys(sender_user, sender_device)?,
})
}
/// # `POST /_matrix/client/r0/keys/query`
///
/// Get end-to-end encryption keys for the given users.
///
/// - Always fetches users from other servers over federation
/// - Gets master keys, self-signing keys, user signing keys and device keys.
/// - The master and self-signing keys contain signatures that the user is allowed to see
pub async fn get_keys_route(body: Ruma<get_keys::v3::Request>) -> Result<get_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let response =
get_keys_helper(Some(sender_user), &body.device_keys, |u| u == sender_user).await?;
Ok(response)
}
/// # `POST /_matrix/client/r0/keys/claim`
///
/// Claims one-time keys
pub async fn claim_keys_route(
body: Ruma<claim_keys::v3::Request>,
) -> Result<claim_keys::v3::Response> {
let response = claim_keys_helper(&body.one_time_keys).await?;
Ok(response)
}
/// # `POST /_matrix/client/r0/keys/device_signing/upload`
///
/// Uploads end-to-end key information for the sender user.
///
/// - Requires UIAA to verify password
pub async fn upload_signing_keys_route(
body: Ruma<upload_signing_keys::v3::Request>,
) -> Result<upload_signing_keys::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
// UIAA
let mut uiaainfo = UiaaInfo {
flows: vec![AuthFlow {
stages: vec![AuthType::Password],
}],
completed: Vec::new(),
params: Default::default(),
session: None,
auth_error: None,
};
if let Some(auth) = &body.auth {
let (worked, uiaainfo) =
services()
.uiaa
.try_auth(sender_user, sender_device, auth, &uiaainfo)?;
if !worked {
return Err(Error::Uiaa(uiaainfo));
}
// Success!
} else if let Some(json) = body.json_body {
uiaainfo.session = Some(utils::random_string(SESSION_ID_LENGTH));
services()
.uiaa
.create(sender_user, sender_device, &uiaainfo, &json)?;
return Err(Error::Uiaa(uiaainfo));
} else {
return Err(Error::BadRequest(ErrorKind::NotJson, "Not json."));
}
if let Some(master_key) = &body.master_key {
services().users.add_cross_signing_keys(
sender_user,
master_key,
&body.self_signing_key,
&body.user_signing_key,
true, // notify so that other users see the new keys
)?;
}
Ok(upload_signing_keys::v3::Response {})
}
/// # `POST /_matrix/client/r0/keys/signatures/upload`
///
/// Uploads end-to-end key signatures from the sender user.
pub async fn upload_signatures_route(
body: Ruma<upload_signatures::v3::Request>,
) -> Result<upload_signatures::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
for (user_id, keys) in &body.signed_keys {
for (key_id, key) in keys {
let key = serde_json::to_value(key)
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid key JSON"))?;
for signature in key
.get("signatures")
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Missing signatures field.",
))?
.get(sender_user.to_string())
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Invalid user in signatures field.",
))?
.as_object()
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Invalid signature.",
))?
.clone()
.into_iter()
{
// Signature validation?
let signature = (
signature.0,
signature
.1
.as_str()
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Invalid signature value.",
))?
.to_owned(),
);
services()
.users
.sign_key(user_id, key_id, signature, sender_user)?;
}
}
}
Ok(upload_signatures::v3::Response {
failures: BTreeMap::new(), // TODO: integrate
})
}
/// # `POST /_matrix/client/r0/keys/changes`
///
/// Gets a list of users who have updated their device identity keys since the previous sync token.
///
/// - TODO: left users
pub async fn get_key_changes_route(
body: Ruma<get_key_changes::v3::Request>,
) -> Result<get_key_changes::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let mut device_list_updates = HashSet::new();
device_list_updates.extend(
services()
.users
.keys_changed(
sender_user.as_str(),
body.from
.parse()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid `from`."))?,
Some(
body.to
.parse()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid `to`."))?,
),
)
.filter_map(|r| r.ok()),
);
for room_id in services()
.rooms
.state_cache
.rooms_joined(sender_user)
.filter_map(|r| r.ok())
{
device_list_updates.extend(
services()
.users
.keys_changed(
room_id.as_ref(),
body.from.parse().map_err(|_| {
Error::BadRequest(ErrorKind::InvalidParam, "Invalid `from`.")
})?,
Some(body.to.parse().map_err(|_| {
Error::BadRequest(ErrorKind::InvalidParam, "Invalid `to`.")
})?),
)
.filter_map(|r| r.ok()),
);
}
Ok(get_key_changes::v3::Response {
changed: device_list_updates.into_iter().collect(),
left: Vec::new(), // TODO
})
}
pub(crate) async fn get_keys_helper<F: Fn(&UserId) -> bool>(
sender_user: Option<&UserId>,
device_keys_input: &BTreeMap<OwnedUserId, Vec<OwnedDeviceId>>,
allowed_signatures: F,
) -> Result<get_keys::v3::Response> {
let mut master_keys = BTreeMap::new();
let mut self_signing_keys = BTreeMap::new();
let mut user_signing_keys = BTreeMap::new();
let mut device_keys = BTreeMap::new();
let mut get_over_federation = HashMap::new();
for (user_id, device_ids) in device_keys_input {
let user_id: &UserId = user_id;
if user_id.server_name() != services().globals.server_name() {
get_over_federation
.entry(user_id.server_name())
.or_insert_with(Vec::new)
.push((user_id, device_ids));
continue;
}
if device_ids.is_empty() {
let mut container = BTreeMap::new();
for device_id in services().users.all_device_ids(user_id) {
let device_id = device_id?;
if let Some(mut keys) = services().users.get_device_keys(user_id, &device_id)? {
let metadata = services()
.users
.get_device_metadata(user_id, &device_id)?
.ok_or_else(|| {
Error::bad_database("all_device_keys contained nonexistent device.")
})?;
add_unsigned_device_display_name(&mut keys, metadata)
.map_err(|_| Error::bad_database("invalid device keys in database"))?;
container.insert(device_id, keys);
}
}
device_keys.insert(user_id.to_owned(), container);
} else {
for device_id in device_ids {
let mut container = BTreeMap::new();
if let Some(mut keys) = services().users.get_device_keys(user_id, device_id)? {
let metadata = services()
.users
.get_device_metadata(user_id, device_id)?
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Tried to get keys for nonexistent device.",
))?;
add_unsigned_device_display_name(&mut keys, metadata)
.map_err(|_| Error::bad_database("invalid device keys in database"))?;
container.insert(device_id.to_owned(), keys);
}
device_keys.insert(user_id.to_owned(), container);
}
}
if let Some(master_key) =
services()
.users
.get_master_key(sender_user, user_id, &allowed_signatures)?
{
master_keys.insert(user_id.to_owned(), master_key);
}
if let Some(self_signing_key) =
services()
.users
.get_self_signing_key(sender_user, user_id, &allowed_signatures)?
{
self_signing_keys.insert(user_id.to_owned(), self_signing_key);
}
if Some(user_id) == sender_user {
if let Some(user_signing_key) = services().users.get_user_signing_key(user_id)? {
user_signing_keys.insert(user_id.to_owned(), user_signing_key);
}
}
}
let mut failures = BTreeMap::new();
let back_off = |id| async {
match services()
.globals
.bad_query_ratelimiter
.write()
.await
.entry(id)
{
hash_map::Entry::Vacant(e) => {
e.insert((Instant::now(), 1));
}
hash_map::Entry::Occupied(mut e) => *e.get_mut() = (Instant::now(), e.get().1 + 1),
}
};
let mut futures: FuturesUnordered<_> = get_over_federation
.into_iter()
.map(|(server, vec)| async move {
if let Some((time, tries)) = services()
.globals
.bad_query_ratelimiter
.read()
.await
.get(server)
{
// Exponential backoff
let mut min_elapsed_duration = Duration::from_secs(30) * (*tries) * (*tries);
if min_elapsed_duration > Duration::from_secs(60 * 60 * 24) {
min_elapsed_duration = Duration::from_secs(60 * 60 * 24);
}
if time.elapsed() < min_elapsed_duration {
debug!("Backing off query from {:?}", server);
return (
server,
Err(Error::BadServerResponse("bad query, still backing off")),
);
}
}
let mut device_keys_input_fed = BTreeMap::new();
for (user_id, keys) in vec {
device_keys_input_fed.insert(user_id.to_owned(), keys.clone());
}
(
server,
tokio::time::timeout(
Duration::from_secs(25),
services().sending.send_federation_request(
server,
federation::keys::get_keys::v1::Request {
device_keys: device_keys_input_fed,
},
),
)
.await
.map_err(|_e| Error::BadServerResponse("Query took too long")),
)
})
.collect();
while let Some((server, response)) = futures.next().await {
match response {
Ok(Ok(response)) => {
for (user, masterkey) in response.master_keys {
let (master_key_id, mut master_key) =
services().users.parse_master_key(&user, &masterkey)?;
if let Some(our_master_key) = services().users.get_key(
&master_key_id,
sender_user,
&user,
&allowed_signatures,
)? {
let (_, our_master_key) =
services().users.parse_master_key(&user, &our_master_key)?;
master_key.signatures.extend(our_master_key.signatures);
}
let json = serde_json::to_value(master_key).expect("to_value always works");
let raw = serde_json::from_value(json).expect("Raw::from_value always works");
services().users.add_cross_signing_keys(
&user, &raw, &None, &None,
false, // Don't notify. A notification would trigger another key request, resulting in an endless loop.
)?;
master_keys.insert(user, raw);
}
self_signing_keys.extend(response.self_signing_keys);
device_keys.extend(response.device_keys);
}
_ => {
back_off(server.to_owned()).await;
failures.insert(server.to_string(), json!({}));
}
}
}
Ok(get_keys::v3::Response {
master_keys,
self_signing_keys,
user_signing_keys,
device_keys,
failures,
})
}
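The federation key query above skips servers that recently failed, waiting 30 seconds times the square of the failure count, capped at 24 hours. A minimal standalone sketch of that backoff rule (plain std only; an editorial illustration, not part of the diff):
use std::time::{Duration, Instant};
/// Returns true if a server should still be skipped, given its last failure
/// time and how many failures have been recorded for it.
fn still_backing_off(last_failure: Instant, tries: u32) -> bool {
    // 30s * tries^2, capped at one day, mirroring the logic in the query loop above.
    let mut min_elapsed = Duration::from_secs(30) * tries * tries;
    let cap = Duration::from_secs(60 * 60 * 24);
    if min_elapsed > cap {
        min_elapsed = cap;
    }
    last_failure.elapsed() < min_elapsed
}
fn main() {
    // After 3 failures the wait is 30s * 9 = 270s, so a just-failed server is skipped.
    assert!(still_backing_off(Instant::now(), 3));
}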
fn add_unsigned_device_display_name(
keys: &mut Raw<ruma::encryption::DeviceKeys>,
metadata: ruma::api::client::device::Device,
) -> serde_json::Result<()> {
if let Some(display_name) = metadata.display_name {
let mut object = keys.deserialize_as::<serde_json::Map<String, serde_json::Value>>()?;
let unsigned = object.entry("unsigned").or_insert_with(|| json!({}));
if let serde_json::Value::Object(unsigned_object) = unsigned {
unsigned_object.insert("device_display_name".to_owned(), display_name.into());
}
*keys = Raw::from_json(serde_json::value::to_raw_value(&object)?);
}
Ok(())
}
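add_unsigned_device_display_name above only touches the `unsigned` object inside the serialized device keys. A self-contained sketch of the same JSON manipulation on plain serde_json values (the real code goes through ruma's Raw type; this only shows the shape of the change):
use serde_json::{json, Value};
fn add_display_name(keys: &mut Value, display_name: &str) {
    if let Value::Object(object) = keys {
        // Create `unsigned` if it is missing, then add the display name to it.
        let unsigned = object.entry("unsigned").or_insert_with(|| json!({}));
        if let Value::Object(unsigned_object) = unsigned {
            unsigned_object.insert("device_display_name".to_owned(), display_name.into());
        }
    }
}
fn main() {
    let mut keys = json!({ "device_id": "ABCDEFG", "algorithms": [] });
    add_display_name(&mut keys, "My phone");
    assert_eq!(keys["unsigned"]["device_display_name"], "My phone");
}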
pub(crate) async fn claim_keys_helper(
one_time_keys_input: &BTreeMap<OwnedUserId, BTreeMap<OwnedDeviceId, DeviceKeyAlgorithm>>,
) -> Result<claim_keys::v3::Response> {
let mut one_time_keys = BTreeMap::new();
let mut get_over_federation = BTreeMap::new();
for (user_id, map) in one_time_keys_input {
if user_id.server_name() != services().globals.server_name() {
get_over_federation
.entry(user_id.server_name())
.or_insert_with(Vec::new)
.push((user_id, map));
}
let mut container = BTreeMap::new();
for (device_id, key_algorithm) in map {
if let Some(one_time_keys) =
services()
.users
.take_one_time_key(user_id, device_id, key_algorithm)?
{
let mut c = BTreeMap::new();
c.insert(one_time_keys.0, one_time_keys.1);
container.insert(device_id.clone(), c);
}
}
one_time_keys.insert(user_id.clone(), container);
}
let mut failures = BTreeMap::new();
let mut futures: FuturesUnordered<_> = get_over_federation
.into_iter()
.map(|(server, vec)| async move {
let mut one_time_keys_input_fed = BTreeMap::new();
for (user_id, keys) in vec {
one_time_keys_input_fed.insert(user_id.clone(), keys.clone());
}
(
server,
services()
.sending
.send_federation_request(
server,
federation::keys::claim_keys::v1::Request {
one_time_keys: one_time_keys_input_fed,
},
)
.await,
)
})
.collect();
while let Some((server, response)) = futures.next().await {
match response {
Ok(keys) => {
one_time_keys.extend(keys.one_time_keys);
}
Err(_e) => {
failures.insert(server.to_string(), json!({}));
}
}
}
Ok(claim_keys::v3::Response {
failures,
one_time_keys,
})
}
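Both key helpers above split a request into a local part and a per-server federation part using the entry/or_insert_with pattern on a BTreeMap. A tiny standalone illustration of that grouping step (the user and server strings are hypothetical, and the server name is extracted naively for brevity):
use std::collections::BTreeMap;
fn main() {
    let users = ["@alice:local", "@bob:remote.org", "@carol:remote.org"];
    let mut get_over_federation: BTreeMap<&str, Vec<&str>> = BTreeMap::new();
    for user in users {
        // Naive split; ruma's UserId::server_name() does this properly.
        let server = user.split(':').nth(1).expect("user id contains a server name");
        if server != "local" {
            get_over_federation
                .entry(server)
                .or_insert_with(Vec::new)
                .push(user);
        }
    }
    // Both remote users end up in one batch for remote.org.
    assert_eq!(get_over_federation["remote.org"].len(), 2);
}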

View file

@ -0,0 +1,467 @@
// Unauthenticated media is deprecated
#![allow(deprecated)]
use std::time::Duration;
use crate::{service::media::FileMeta, services, utils, Error, Result, Ruma};
use http::header::{CONTENT_DISPOSITION, CONTENT_TYPE};
use ruma::{
api::{
client::{
authenticated_media::{
get_content, get_content_as_filename, get_content_thumbnail, get_media_config,
},
error::ErrorKind,
media::{self, create_content},
},
federation::authenticated_media::{self as federation_media, FileOrLocation},
},
http_headers::{ContentDisposition, ContentDispositionType},
media::Method,
ServerName, UInt,
};
const MXC_LENGTH: usize = 32;
/// # `GET /_matrix/media/r0/config`
///
/// Returns max upload size.
pub async fn get_media_config_route(
_body: Ruma<media::get_media_config::v3::Request>,
) -> Result<media::get_media_config::v3::Response> {
Ok(media::get_media_config::v3::Response {
upload_size: services().globals.max_request_size().into(),
})
}
/// # `GET /_matrix/client/v1/media/config`
///
/// Returns max upload size.
pub async fn get_media_config_auth_route(
_body: Ruma<get_media_config::v1::Request>,
) -> Result<get_media_config::v1::Response> {
Ok(get_media_config::v1::Response {
upload_size: services().globals.max_request_size().into(),
})
}
/// # `POST /_matrix/media/r0/upload`
///
/// Permanently saves media on the server.
///
/// - Some metadata will be saved in the database
/// - Media will be saved in the media/ directory
pub async fn create_content_route(
body: Ruma<create_content::v3::Request>,
) -> Result<create_content::v3::Response> {
let mxc = format!(
"mxc://{}/{}",
services().globals.server_name(),
utils::random_string(MXC_LENGTH)
);
services()
.media
.create(
mxc.clone(),
Some(
ContentDisposition::new(ContentDispositionType::Inline)
.with_filename(body.filename.clone()),
),
body.content_type.as_deref(),
&body.file,
)
.await?;
Ok(create_content::v3::Response {
content_uri: mxc.into(),
blurhash: None,
})
}
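create_content_route above mints a fresh mxc:// URI from the local server name and a 32-character random id (MXC_LENGTH). A trivial standalone illustration of the URI shape, with a fixed id standing in for the random one and an assumed server name:
fn main() {
    let server_name = "example.org"; // stand-in for services().globals.server_name()
    let media_id = "abcdefghijklmnopqrstuvwxyz012345"; // 32 characters, normally random
    let mxc = format!("mxc://{}/{}", server_name, media_id);
    assert_eq!(mxc, "mxc://example.org/abcdefghijklmnopqrstuvwxyz012345");
}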
pub async fn get_remote_content(
mxc: &str,
server_name: &ServerName,
media_id: String,
) -> Result<get_content::v1::Response, Error> {
let content_response = match services()
.sending
.send_federation_request(
server_name,
federation_media::get_content::v1::Request {
media_id: media_id.clone(),
timeout_ms: Duration::from_secs(20),
},
)
.await
{
Ok(federation_media::get_content::v1::Response {
metadata: _,
content: FileOrLocation::File(content),
}) => get_content::v1::Response {
file: content.file,
content_type: content.content_type,
content_disposition: content.content_disposition,
},
Ok(federation_media::get_content::v1::Response {
metadata: _,
content: FileOrLocation::Location(url),
}) => get_location_content(url).await?,
Err(Error::BadRequest(ErrorKind::Unrecognized, _)) => {
let media::get_content::v3::Response {
file,
content_type,
content_disposition,
..
} = services()
.sending
.send_federation_request(
server_name,
media::get_content::v3::Request {
server_name: server_name.to_owned(),
media_id,
timeout_ms: Duration::from_secs(20),
allow_remote: false,
allow_redirect: true,
},
)
.await?;
get_content::v1::Response {
file,
content_type,
content_disposition,
}
}
Err(e) => return Err(e),
};
services()
.media
.create(
mxc.to_owned(),
content_response.content_disposition.clone(),
content_response.content_type.as_deref(),
&content_response.file,
)
.await?;
Ok(content_response)
}
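get_remote_content above tries the authenticated federation media endpoint first and only falls back to the deprecated /_matrix/media/v3 download when the remote server answers with an Unrecognized error; any other error is returned as-is. A schematic standalone sketch of that fallback pattern (the error enum and fetch functions are hypothetical placeholders, not conduit or ruma APIs):
#[allow(dead_code)]
#[derive(Debug)]
enum FetchError {
    Unrecognized, // the remote does not know the new endpoint
    Other(String),
}
fn fetch_authenticated(_media_id: &str) -> Result<Vec<u8>, FetchError> {
    // Pretend the remote server predates authenticated media.
    Err(FetchError::Unrecognized)
}
fn fetch_legacy(_media_id: &str) -> Result<Vec<u8>, FetchError> {
    Ok(b"file contents".to_vec())
}
fn fetch_with_fallback(media_id: &str) -> Result<Vec<u8>, FetchError> {
    match fetch_authenticated(media_id) {
        Ok(file) => Ok(file),
        // Only fall back when the endpoint itself is unknown; other errors bubble up.
        Err(FetchError::Unrecognized) => fetch_legacy(media_id),
        Err(e) => Err(e),
    }
}
fn main() {
    assert!(fetch_with_fallback("some_media_id").is_ok());
}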
/// # `GET /_matrix/media/r0/download/{serverName}/{mediaId}`
///
/// Load media from our server or over federation.
///
/// - Only allows federation if `allow_remote` is true
pub async fn get_content_route(
body: Ruma<media::get_content::v3::Request>,
) -> Result<media::get_content::v3::Response> {
let get_content::v1::Response {
file,
content_disposition,
content_type,
} = get_content(&body.server_name, body.media_id.clone(), body.allow_remote).await?;
Ok(media::get_content::v3::Response {
file,
content_type,
content_disposition,
cross_origin_resource_policy: Some("cross-origin".to_owned()),
})
}
/// # `GET /_matrix/client/v1/media/download/{serverName}/{mediaId}`
///
/// Load media from our server or over federation.
pub async fn get_content_auth_route(
body: Ruma<get_content::v1::Request>,
) -> Result<get_content::v1::Response> {
get_content(&body.server_name, body.media_id.clone(), true).await
}
async fn get_content(
server_name: &ServerName,
media_id: String,
allow_remote: bool,
) -> Result<get_content::v1::Response, Error> {
let mxc = format!("mxc://{}/{}", server_name, media_id);
if let Ok(Some(FileMeta {
content_disposition,
content_type,
file,
})) = services().media.get(mxc.clone()).await
{
Ok(get_content::v1::Response {
file,
content_type,
content_disposition: Some(content_disposition),
})
} else if server_name != services().globals.server_name() && allow_remote {
let remote_content_response =
get_remote_content(&mxc, server_name, media_id.clone()).await?;
Ok(get_content::v1::Response {
content_disposition: remote_content_response.content_disposition,
content_type: remote_content_response.content_type,
file: remote_content_response.file,
})
} else {
Err(Error::BadRequest(ErrorKind::NotFound, "Media not found."))
}
}
/// # `GET /_matrix/media/r0/download/{serverName}/{mediaId}/{fileName}`
///
/// Load media from our server or over federation, returning it under the requested filename.
///
/// - Only allows federation if `allow_remote` is true
pub async fn get_content_as_filename_route(
body: Ruma<media::get_content_as_filename::v3::Request>,
) -> Result<media::get_content_as_filename::v3::Response> {
let get_content_as_filename::v1::Response {
file,
content_type,
content_disposition,
} = get_content_as_filename(
&body.server_name,
body.media_id.clone(),
body.filename.clone(),
body.allow_remote,
)
.await?;
Ok(media::get_content_as_filename::v3::Response {
file,
content_type,
content_disposition,
cross_origin_resource_policy: Some("cross-origin".to_owned()),
})
}
/// # `GET /_matrix/client/v1/media/download/{serverName}/{mediaId}/{fileName}`
///
/// Load media from our server or over federation, returning it under the requested filename.
pub async fn get_content_as_filename_auth_route(
body: Ruma<get_content_as_filename::v1::Request>,
) -> Result<get_content_as_filename::v1::Response, Error> {
get_content_as_filename(
&body.server_name,
body.media_id.clone(),
body.filename.clone(),
true,
)
.await
}
async fn get_content_as_filename(
server_name: &ServerName,
media_id: String,
filename: String,
allow_remote: bool,
) -> Result<get_content_as_filename::v1::Response, Error> {
let mxc = format!("mxc://{}/{}", server_name, media_id);
if let Ok(Some(FileMeta {
file, content_type, ..
})) = services().media.get(mxc.clone()).await
{
Ok(get_content_as_filename::v1::Response {
file,
content_type,
content_disposition: Some(
ContentDisposition::new(ContentDispositionType::Inline)
.with_filename(Some(filename.clone())),
),
})
} else if server_name != services().globals.server_name() && allow_remote {
let remote_content_response =
get_remote_content(&mxc, server_name, media_id.clone()).await?;
Ok(get_content_as_filename::v1::Response {
content_disposition: Some(
ContentDisposition::new(ContentDispositionType::Inline)
.with_filename(Some(filename.clone())),
),
content_type: remote_content_response.content_type,
file: remote_content_response.file,
})
} else {
Err(Error::BadRequest(ErrorKind::NotFound, "Media not found."))
}
}
/// # `GET /_matrix/media/r0/thumbnail/{serverName}/{mediaId}`
///
/// Load media thumbnail from our server or over federation.
///
/// - Only allows federation if `allow_remote` is true
pub async fn get_content_thumbnail_route(
body: Ruma<media::get_content_thumbnail::v3::Request>,
) -> Result<media::get_content_thumbnail::v3::Response> {
let get_content_thumbnail::v1::Response { file, content_type } = get_content_thumbnail(
&body.server_name,
body.media_id.clone(),
body.height,
body.width,
body.method.clone(),
body.animated,
body.allow_remote,
)
.await?;
Ok(media::get_content_thumbnail::v3::Response {
file,
content_type,
cross_origin_resource_policy: Some("cross-origin".to_owned()),
})
}
/// # `GET /_matrix/client/v1/media/thumbnail/{serverName}/{mediaId}`
///
/// Load media thumbnail from our server or over federation.
pub async fn get_content_thumbnail_auth_route(
body: Ruma<get_content_thumbnail::v1::Request>,
) -> Result<get_content_thumbnail::v1::Response> {
get_content_thumbnail(
&body.server_name,
body.media_id.clone(),
body.height,
body.width,
body.method.clone(),
body.animated,
true,
)
.await
}
async fn get_content_thumbnail(
server_name: &ServerName,
media_id: String,
height: UInt,
width: UInt,
method: Option<Method>,
animated: Option<bool>,
allow_remote: bool,
) -> Result<get_content_thumbnail::v1::Response, Error> {
let mxc = format!("mxc://{}/{}", server_name, media_id);
if let Ok(Some(FileMeta {
file, content_type, ..
})) = services()
.media
.get_thumbnail(
mxc.clone(),
width
.try_into()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Width is invalid."))?,
height
.try_into()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Height is invalid."))?,
)
.await
{
Ok(get_content_thumbnail::v1::Response { file, content_type })
} else if server_name != services().globals.server_name() && allow_remote {
let thumbnail_response = match services()
.sending
.send_federation_request(
server_name,
federation_media::get_content_thumbnail::v1::Request {
height,
width,
method: method.clone(),
media_id: media_id.clone(),
timeout_ms: Duration::from_secs(20),
animated,
},
)
.await
{
Ok(federation_media::get_content_thumbnail::v1::Response {
metadata: _,
content: FileOrLocation::File(content),
}) => get_content_thumbnail::v1::Response {
file: content.file,
content_type: content.content_type,
},
Ok(federation_media::get_content_thumbnail::v1::Response {
metadata: _,
content: FileOrLocation::Location(url),
}) => {
let get_content::v1::Response {
file, content_type, ..
} = get_location_content(url).await?;
get_content_thumbnail::v1::Response { file, content_type }
}
Err(Error::BadRequest(ErrorKind::Unrecognized, _)) => {
let media::get_content_thumbnail::v3::Response {
file, content_type, ..
} = services()
.sending
.send_federation_request(
server_name,
media::get_content_thumbnail::v3::Request {
height,
width,
method: method.clone(),
server_name: server_name.to_owned(),
media_id: media_id.clone(),
timeout_ms: Duration::from_secs(20),
allow_redirect: false,
animated,
allow_remote: false,
},
)
.await?;
get_content_thumbnail::v1::Response { file, content_type }
}
Err(e) => return Err(e),
};
services()
.media
.upload_thumbnail(
mxc,
thumbnail_response.content_type.as_deref(),
width.try_into().expect("all UInts are valid u32s"),
height.try_into().expect("all UInts are valid u32s"),
&thumbnail_response.file,
)
.await?;
Ok(thumbnail_response)
} else {
Err(Error::BadRequest(ErrorKind::NotFound, "Media not found."))
}
}
async fn get_location_content(url: String) -> Result<get_content::v1::Response, Error> {
let client = services().globals.default_client();
let response = client.get(url).send().await?;
let headers = response.headers();
let content_type = headers
.get(CONTENT_TYPE)
.and_then(|header| header.to_str().ok())
.map(ToOwned::to_owned);
let content_disposition = headers
.get(CONTENT_DISPOSITION)
.map(|header| header.as_bytes())
.map(TryFrom::try_from)
.and_then(Result::ok);
let file = response.bytes().await?.to_vec();
Ok(get_content::v1::Response {
file,
content_type,
content_disposition,
})
}
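When the federation response points at an external location, get_location_content above re-downloads the file and lifts Content-Type and Content-Disposition out of the HTTP response headers. A standalone sketch of that header handling using only the http crate's HeaderMap, with no network involved; the disposition is kept as a plain string here instead of being parsed into ruma's ContentDisposition type:
use http::header::{HeaderMap, HeaderValue, CONTENT_DISPOSITION, CONTENT_TYPE};
fn main() {
    let mut headers = HeaderMap::new();
    headers.insert(CONTENT_TYPE, HeaderValue::from_static("image/png"));
    headers.insert(
        CONTENT_DISPOSITION,
        HeaderValue::from_static("inline; filename=\"cat.png\""),
    );
    // Missing or non-UTF-8 headers simply become None, as in the handler above.
    let content_type = headers
        .get(CONTENT_TYPE)
        .and_then(|header| header.to_str().ok())
        .map(ToOwned::to_owned);
    let content_disposition = headers
        .get(CONTENT_DISPOSITION)
        .and_then(|header| header.to_str().ok())
        .map(ToOwned::to_owned);
    assert_eq!(content_type.as_deref(), Some("image/png"));
    assert_eq!(content_disposition.as_deref(), Some("inline; filename=\"cat.png\""));
}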

File diff suppressed because it is too large.

View file

@ -0,0 +1,273 @@
use crate::{
service::{pdu::PduBuilder, rooms::timeline::PduCount},
services, utils, Error, Result, Ruma,
};
use ruma::{
api::client::{
error::ErrorKind,
message::{get_message_events, send_message_event},
},
events::{StateEventType, TimelineEventType},
};
use std::{
collections::{BTreeMap, HashSet},
sync::Arc,
};
/// # `PUT /_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId}`
///
/// Send a message event into the room.
///
/// - Is a no-op if the txn id was already used before and returns the same event id again
/// - The only requirement for the content is that it has to be valid JSON
/// - Tries to send the event into the room; auth rules will determine if it is allowed
pub async fn send_message_event_route(
body: Ruma<send_message_event::v3::Request>,
) -> Result<send_message_event::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_deref();
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(body.room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
// Forbid m.room.encrypted if encryption is disabled
if TimelineEventType::RoomEncrypted == body.event_type.to_string().into()
&& !services().globals.allow_encryption()
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Encryption has been disabled",
));
}
// Check if this is a new transaction id
if let Some(response) =
services()
.transaction_ids
.existing_txnid(sender_user, sender_device, &body.txn_id)?
{
// The client might have sent a txnid of the /sendToDevice endpoint
// This txnid has no response associated with it
if response.is_empty() {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Tried to use txn id already used for an incompatible endpoint.",
));
}
let event_id = utils::string_from_bytes(&response)
.map_err(|_| Error::bad_database("Invalid txnid bytes in database."))?
.try_into()
.map_err(|_| Error::bad_database("Invalid event id in txnid data."))?;
return Ok(send_message_event::v3::Response { event_id });
}
let mut unsigned = BTreeMap::new();
unsigned.insert("transaction_id".to_owned(), body.txn_id.to_string().into());
let event_id = services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: body.event_type.to_string().into(),
content: serde_json::from_str(body.body.body.json().get())
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Invalid JSON body."))?,
unsigned: Some(unsigned),
state_key: None,
redacts: None,
timestamp: if body.appservice_info.is_some() {
body.timestamp
} else {
None
},
},
sender_user,
&body.room_id,
&state_lock,
)
.await?;
services().transaction_ids.add_txnid(
sender_user,
sender_device,
&body.txn_id,
event_id.as_bytes(),
)?;
drop(state_lock);
Ok(send_message_event::v3::Response::new(
(*event_id).to_owned(),
))
}
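The transaction-id handling in send_message_event_route above makes the endpoint idempotent: the first request stores the created event id under (user, device, txn_id), and a retry with the same txn id gets that stored id back instead of producing a second event. A standalone sketch of the same idea with an in-memory map (hypothetical types, not conduit's transaction_ids service):
use std::collections::HashMap;
struct TxnCache {
    // (device_id, txn_id) -> event_id
    seen: HashMap<(String, String), String>,
}
impl TxnCache {
    fn send(&mut self, device_id: &str, txn_id: &str, make_event: impl FnOnce() -> String) -> String {
        if let Some(event_id) = self.seen.get(&(device_id.to_owned(), txn_id.to_owned())) {
            return event_id.clone(); // retried request: no new event is created
        }
        let event_id = make_event();
        self.seen.insert((device_id.to_owned(), txn_id.to_owned()), event_id.clone());
        event_id
    }
}
fn main() {
    let mut cache = TxnCache { seen: HashMap::new() };
    let first = cache.send("DEVICE", "txn1", || "$event1".to_owned());
    let retry = cache.send("DEVICE", "txn1", || "$event2".to_owned());
    assert_eq!(first, retry); // the retry returns the original event id
}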
/// # `GET /_matrix/client/r0/rooms/{roomId}/messages`
///
/// Allows paginating through room history.
///
/// - Only works if the user is joined (TODO: always allow, but only show events where the user was
/// joined, depending on history_visibility)
pub async fn get_message_events_route(
body: Ruma<get_message_events::v3::Request>,
) -> Result<get_message_events::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
let from = match body.from.clone() {
Some(from) => PduCount::try_from_string(&from)?,
None => match body.dir {
ruma::api::Direction::Forward => PduCount::min(),
ruma::api::Direction::Backward => PduCount::max(),
},
};
let to = body
.to
.as_ref()
.and_then(|t| PduCount::try_from_string(t).ok());
services()
.rooms
.lazy_loading
.lazy_load_confirm_delivery(sender_user, sender_device, &body.room_id, from)
.await?;
let limit = u64::from(body.limit).min(100) as usize;
let next_token;
let mut resp = get_message_events::v3::Response::new();
let mut lazy_loaded = HashSet::new();
match body.dir {
ruma::api::Direction::Forward => {
let events_after: Vec<_> = services()
.rooms
.timeline
.pdus_after(sender_user, &body.room_id, from)?
.take(limit)
.filter_map(|r| r.ok()) // Filter out buggy events
.filter(|(_, pdu)| {
services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &body.room_id, &pdu.event_id)
.unwrap_or(false)
})
.take_while(|&(k, _)| Some(k) != to) // Stop at `to`
.collect();
for (_, event) in &events_after {
/* TODO: Remove this when these are resolved:
* https://github.com/vector-im/element-android/issues/3417
* https://github.com/vector-im/element-web/issues/21034
if !services().rooms.lazy_loading.lazy_load_was_sent_before(
sender_user,
sender_device,
&body.room_id,
&event.sender,
)? {
lazy_loaded.insert(event.sender.clone());
}
*/
lazy_loaded.insert(event.sender.clone());
}
next_token = events_after.last().map(|(count, _)| count).copied();
let events_after: Vec<_> = events_after
.into_iter()
.map(|(_, pdu)| pdu.to_room_event())
.collect();
resp.start = from.stringify();
resp.end = next_token.map(|count| count.stringify());
resp.chunk = events_after;
}
ruma::api::Direction::Backward => {
services()
.rooms
.timeline
.backfill_if_required(&body.room_id, from)
.await?;
let events_before: Vec<_> = services()
.rooms
.timeline
.pdus_until(sender_user, &body.room_id, from)?
.take(limit)
.filter_map(|r| r.ok()) // Filter out buggy events
.filter(|(_, pdu)| {
services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &body.room_id, &pdu.event_id)
.unwrap_or(false)
})
.take_while(|&(k, _)| Some(k) != to) // Stop at `to`
.collect();
for (_, event) in &events_before {
/* TODO: Remove this when these are resolved:
* https://github.com/vector-im/element-android/issues/3417
* https://github.com/vector-im/element-web/issues/21034
if !services().rooms.lazy_loading.lazy_load_was_sent_before(
sender_user,
sender_device,
&body.room_id,
&event.sender,
)? {
lazy_loaded.insert(event.sender.clone());
}
*/
lazy_loaded.insert(event.sender.clone());
}
next_token = events_before.last().map(|(count, _)| count).copied();
let events_before: Vec<_> = events_before
.into_iter()
.map(|(_, pdu)| pdu.to_room_event())
.collect();
resp.start = from.stringify();
resp.end = next_token.map(|count| count.stringify());
resp.chunk = events_before;
}
}
resp.state = Vec::new();
for ll_id in &lazy_loaded {
if let Some(member_event) = services().rooms.state_accessor.room_state_get(
&body.room_id,
&StateEventType::RoomMember,
ll_id.as_str(),
)? {
resp.state.push(member_event.to_state_event());
}
}
// TODO: enable again when we are sure clients can handle it
/*
if let Some(next_token) = next_token {
services().rooms.lazy_loading.lazy_load_mark_sent(
sender_user,
sender_device,
&body.room_id,
lazy_loaded,
next_token,
);
}
*/
Ok(resp)
}
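get_message_events_route above pages through the timeline using a `from` token, an optional `to` token, a direction, and a limit capped at 100, and hands back the count of the last returned event as the next token. A standalone sketch of that token arithmetic over a plain slice of counts, forward direction only (illustrative; real PduCount tokens and visibility filtering are omitted):
fn page(counts: &[u64], from: u64, to: Option<u64>, limit: usize) -> (Vec<u64>, Option<u64>) {
    let chunk: Vec<u64> = counts
        .iter()
        .copied()
        .filter(|&c| c > from)          // events after `from` (forward direction)
        .take(limit.min(100))           // the limit is capped at 100, as above
        .take_while(|&c| Some(c) != to) // stop once `to` is reached
        .collect();
    let next_token = chunk.last().copied(); // end of this page becomes the next `from`
    (chunk, next_token)
}
fn main() {
    let counts = [1, 2, 3, 4, 5];
    let (chunk, next) = page(&counts, 1, None, 2);
    assert_eq!(chunk, vec![2, 3]);
    assert_eq!(next, Some(3));
}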

View file

@ -11,24 +11,29 @@ mod keys;
mod media;
mod membership;
mod message;
mod openid;
mod presence;
mod profile;
mod push;
mod read_marker;
mod redact;
mod relations;
mod report;
mod room;
mod search;
mod session;
mod space;
mod state;
mod sync;
mod tag;
mod thirdparty;
mod threads;
mod to_device;
mod typing;
mod unversioned;
mod user_directory;
mod voip;
mod well_known;
pub use account::*;
pub use alias::*;
@ -43,42 +48,31 @@ pub use keys::*;
pub use media::*;
pub use membership::*;
pub use message::*;
pub use openid::*;
pub use presence::*;
pub use profile::*;
pub use push::*;
pub use read_marker::*;
pub use redact::*;
pub use relations::*;
pub use report::*;
pub use room::*;
pub use search::*;
pub use session::*;
pub use space::*;
pub use state::*;
pub use sync::*;
pub use tag::*;
pub use thirdparty::*;
pub use threads::*;
pub use to_device::*;
pub use typing::*;
pub use unversioned::*;
pub use user_directory::*;
pub use voip::*;
#[cfg(not(feature = "conduit_bin"))]
use super::State;
#[cfg(feature = "conduit_bin")]
use {
crate::ConduitResult, rocket::options, ruma::api::client::r0::to_device::send_event_to_device,
};
pub use well_known::*;
pub const DEVICE_ID_LENGTH: usize = 10;
pub const TOKEN_LENGTH: usize = 256;
pub const SESSION_ID_LENGTH: usize = 256;
/// # `OPTIONS`
///
/// Web clients use this to get CORS headers.
#[cfg(feature = "conduit_bin")]
#[options("/<_..>")]
#[tracing::instrument]
pub async fn options_route() -> ConduitResult<send_event_to_device::Response> {
Ok(send_event_to_device::Response {}.into())
}
pub const TOKEN_LENGTH: usize = 32;
pub const SESSION_ID_LENGTH: usize = 32;
pub const AUTO_GEN_PASSWORD_LENGTH: usize = 15;

View file

@ -0,0 +1,23 @@
use std::time::Duration;
use ruma::{api::client::account, authentication::TokenType};
use crate::{services, Result, Ruma};
/// # `POST /_matrix/client/r0/user/{userId}/openid/request_token`
///
/// Request an OpenID token to verify identity with third-party services.
///
/// - The token generated is only valid for the OpenID API.
pub async fn create_openid_token_route(
body: Ruma<account::request_openid_token::v3::Request>,
) -> Result<account::request_openid_token::v3::Response> {
let (access_token, expires_in) = services().users.create_openid_token(&body.user_id)?;
Ok(account::request_openid_token::v3::Response {
access_token,
token_type: TokenType::Bearer,
matrix_server_name: services().globals.server_name().to_owned(),
expires_in: Duration::from_secs(expires_in),
})
}

View file

@ -1,35 +1,29 @@
use crate::{database::DatabaseGuard, utils, ConduitResult, Ruma};
use ruma::api::client::r0::presence::{get_presence, set_presence};
use std::{convert::TryInto, time::Duration};
#[cfg(feature = "conduit_bin")]
use rocket::{get, put};
use crate::{services, utils, Error, Result, Ruma};
use ruma::api::client::{
error::ErrorKind,
presence::{get_presence, set_presence},
};
use std::time::Duration;
/// # `PUT /_matrix/client/r0/presence/{userId}/status`
///
/// Sets the presence state of the sender user.
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/presence/<_>/status", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_presence_route(
db: DatabaseGuard,
body: Ruma<set_presence::Request<'_>>,
) -> ConduitResult<set_presence::Response> {
body: Ruma<set_presence::v3::Request>,
) -> Result<set_presence::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
for room_id in db.rooms.rooms_joined(sender_user) {
for room_id in services().rooms.state_cache.rooms_joined(sender_user) {
let room_id = room_id?;
db.rooms.edus.update_presence(
services().rooms.edus.presence.update_presence(
sender_user,
&room_id,
ruma::events::presence::PresenceEvent {
content: ruma::events::presence::PresenceEventContent {
avatar_url: db.users.avatar_url(sender_user)?,
avatar_url: services().users.avatar_url(sender_user)?,
currently_active: None,
displayname: db.users.displayname(sender_user)?,
displayname: services().users.displayname(sender_user)?,
last_active_ago: Some(
utils::millis_since_unix_epoch()
.try_into()
@ -40,13 +34,10 @@ pub async fn set_presence_route(
},
sender: sender_user.clone(),
},
&db.globals,
)?;
}
db.flush()?;
Ok(set_presence::Response {}.into())
Ok(set_presence::v3::Response {})
}
/// # `GET /_matrix/client/r0/presence/{userId}/status`
@ -54,28 +45,24 @@ pub async fn set_presence_route(
/// Gets the presence state of the given user.
///
/// - Only works if you share a room with the user
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/presence/<_>/status", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_presence_route(
db: DatabaseGuard,
body: Ruma<get_presence::Request<'_>>,
) -> ConduitResult<get_presence::Response> {
body: Ruma<get_presence::v3::Request>,
) -> Result<get_presence::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let mut presence_event = None;
for room_id in db
for room_id in services()
.rooms
.user
.get_shared_rooms(vec![sender_user.clone(), body.user_id.clone()])?
{
let room_id = room_id?;
if let Some(presence) = db
if let Some(presence) = services()
.rooms
.edus
.presence
.get_last_presence_event(sender_user, &room_id)?
{
presence_event = Some(presence);
@ -84,7 +71,7 @@ pub async fn get_presence_route(
}
if let Some(presence) = presence_event {
Ok(get_presence::Response {
Ok(get_presence::v3::Response {
// TODO: Should ruma just use the PresenceEventContent type here?
status_msg: presence.content.status_msg,
currently_active: presence.content.currently_active,
@ -93,9 +80,11 @@ pub async fn get_presence_route(
.last_active_ago
.map(|millis| Duration::from_millis(millis.into())),
presence: presence.content.presence,
}
.into())
})
} else {
todo!();
Err(Error::BadRequest(
ErrorKind::NotFound,
"Presence state for this user was not found",
))
}
}

View file

@ -1,58 +1,54 @@
use crate::{database::DatabaseGuard, pdu::PduBuilder, utils, ConduitResult, Error, Ruma};
use crate::{service::pdu::PduBuilder, services, utils, Error, Result, Ruma};
use ruma::{
api::{
client::{
error::ErrorKind,
r0::profile::{
profile::{
get_avatar_url, get_display_name, get_profile, set_avatar_url, set_display_name,
},
},
federation::{self, query::get_profile_information::v1::ProfileField},
},
events::{room::member::RoomMemberEventContent, EventType},
events::{room::member::RoomMemberEventContent, StateEventType, TimelineEventType},
};
use serde_json::value::to_raw_value;
use std::{convert::TryInto, sync::Arc};
#[cfg(feature = "conduit_bin")]
use rocket::{get, put};
use std::sync::Arc;
/// # `PUT /_matrix/client/r0/profile/{userId}/displayname`
///
/// Updates the displayname.
///
/// - Also makes sure other users receive the update using presence EDUs
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/profile/<_>/displayname", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_displayname_route(
db: DatabaseGuard,
body: Ruma<set_display_name::Request<'_>>,
) -> ConduitResult<set_display_name::Response> {
body: Ruma<set_display_name::v3::Request>,
) -> Result<set_display_name::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
db.users
services()
.users
.set_displayname(sender_user, body.displayname.clone())?;
// Send a new membership event and presence update into all joined rooms
let all_rooms_joined: Vec<_> = db
let all_rooms_joined: Vec<_> = services()
.rooms
.state_cache
.rooms_joined(sender_user)
.filter_map(|r| r.ok())
.map(|room_id| {
Ok::<_, Error>((
PduBuilder {
event_type: EventType::RoomMember,
event_type: TimelineEventType::RoomMember,
content: to_raw_value(&RoomMemberEventContent {
displayname: body.displayname.clone(),
join_authorized_via_users_server: None,
..serde_json::from_str(
db.rooms
services()
.rooms
.state_accessor
.room_state_get(
&room_id,
&EventType::RoomMember,
&sender_user.to_string(),
&StateEventType::RoomMember,
sender_user.as_str(),
)?
.ok_or_else(|| {
Error::bad_database(
@ -69,6 +65,7 @@ pub async fn set_displayname_route(
unsigned: None,
state_key: Some(sender_user.to_string()),
redacts: None,
timestamp: None,
},
room_id,
))
@ -78,28 +75,31 @@ pub async fn set_displayname_route(
for (pdu_builder, room_id) in all_rooms_joined {
let mutex_state = Arc::clone(
db.globals
services()
.globals
.roomid_mutex_state
.write()
.unwrap()
.await
.entry(room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
let _ = db
let _ = services()
.rooms
.build_and_append_pdu(pdu_builder, sender_user, &room_id, &db, &state_lock);
.timeline
.build_and_append_pdu(pdu_builder, sender_user, &room_id, &state_lock)
.await;
// Presence update
db.rooms.edus.update_presence(
services().rooms.edus.presence.update_presence(
sender_user,
&room_id,
ruma::events::presence::PresenceEvent {
content: ruma::events::presence::PresenceEventContent {
avatar_url: db.users.avatar_url(sender_user)?,
avatar_url: services().users.avatar_url(sender_user)?,
currently_active: None,
displayname: db.users.displayname(sender_user)?,
displayname: services().users.displayname(sender_user)?,
last_active_ago: Some(
utils::millis_since_unix_epoch()
.try_into()
@ -110,13 +110,10 @@ pub async fn set_displayname_route(
},
sender: sender_user.clone(),
},
&db.globals,
)?;
}
db.flush()?;
Ok(set_display_name::Response {}.into())
Ok(set_display_name::v3::Response {})
}
/// # `GET /_matrix/client/r0/profile/{userId}/displayname`
@ -124,38 +121,29 @@ pub async fn set_displayname_route(
/// Returns the displayname of the user.
///
/// - If user is on another server: Fetches displayname over federation
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/profile/<_>/displayname", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_displayname_route(
db: DatabaseGuard,
body: Ruma<get_display_name::Request<'_>>,
) -> ConduitResult<get_display_name::Response> {
if body.user_id.server_name() != db.globals.server_name() {
let response = db
body: Ruma<get_display_name::v3::Request>,
) -> Result<get_display_name::v3::Response> {
if body.user_id.server_name() != services().globals.server_name() {
let response = services()
.sending
.send_federation_request(
&db.globals,
body.user_id.server_name(),
federation::query::get_profile_information::v1::Request {
user_id: &body.user_id,
field: Some(&ProfileField::DisplayName),
user_id: body.user_id.clone(),
field: Some(ProfileField::DisplayName),
},
)
.await?;
return Ok(get_display_name::Response {
return Ok(get_display_name::v3::Response {
displayname: response.displayname,
}
.into());
});
}
Ok(get_display_name::Response {
displayname: db.users.displayname(&body.user_id)?,
}
.into())
Ok(get_display_name::v3::Response {
displayname: services().users.displayname(&body.user_id)?,
})
}
/// # `PUT /_matrix/client/r0/profile/{userId}/avatar_url`
@ -163,39 +151,40 @@ pub async fn get_displayname_route(
/// Updates the avatar_url and blurhash.
///
/// - Also makes sure other users receive the update using presence EDUs
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/profile/<_>/avatar_url", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn set_avatar_url_route(
db: DatabaseGuard,
body: Ruma<set_avatar_url::Request<'_>>,
) -> ConduitResult<set_avatar_url::Response> {
body: Ruma<set_avatar_url::v3::Request>,
) -> Result<set_avatar_url::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
db.users
services()
.users
.set_avatar_url(sender_user, body.avatar_url.clone())?;
db.users.set_blurhash(sender_user, body.blurhash.clone())?;
services()
.users
.set_blurhash(sender_user, body.blurhash.clone())?;
// Send a new membership event and presence update into all joined rooms
let all_joined_rooms: Vec<_> = db
let all_joined_rooms: Vec<_> = services()
.rooms
.state_cache
.rooms_joined(sender_user)
.filter_map(|r| r.ok())
.map(|room_id| {
Ok::<_, Error>((
PduBuilder {
event_type: EventType::RoomMember,
event_type: TimelineEventType::RoomMember,
content: to_raw_value(&RoomMemberEventContent {
avatar_url: body.avatar_url.clone(),
join_authorized_via_users_server: None,
..serde_json::from_str(
db.rooms
services()
.rooms
.state_accessor
.room_state_get(
&room_id,
&EventType::RoomMember,
&sender_user.to_string(),
&StateEventType::RoomMember,
sender_user.as_str(),
)?
.ok_or_else(|| {
Error::bad_database(
@ -212,6 +201,7 @@ pub async fn set_avatar_url_route(
unsigned: None,
state_key: Some(sender_user.to_string()),
redacts: None,
timestamp: None,
},
room_id,
))
@ -221,28 +211,31 @@ pub async fn set_avatar_url_route(
for (pdu_builder, room_id) in all_joined_rooms {
let mutex_state = Arc::clone(
db.globals
services()
.globals
.roomid_mutex_state
.write()
.unwrap()
.await
.entry(room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
let _ = db
let _ = services()
.rooms
.build_and_append_pdu(pdu_builder, sender_user, &room_id, &db, &state_lock);
.timeline
.build_and_append_pdu(pdu_builder, sender_user, &room_id, &state_lock)
.await;
// Presence update
db.rooms.edus.update_presence(
services().rooms.edus.presence.update_presence(
sender_user,
&room_id,
ruma::events::presence::PresenceEvent {
content: ruma::events::presence::PresenceEventContent {
avatar_url: db.users.avatar_url(sender_user)?,
avatar_url: services().users.avatar_url(sender_user)?,
currently_active: None,
displayname: db.users.displayname(sender_user)?,
displayname: services().users.displayname(sender_user)?,
last_active_ago: Some(
utils::millis_since_unix_epoch()
.try_into()
@ -253,13 +246,10 @@ pub async fn set_avatar_url_route(
},
sender: sender_user.clone(),
},
&db.globals,
)?;
}
db.flush()?;
Ok(set_avatar_url::Response {}.into())
Ok(set_avatar_url::v3::Response {})
}
/// # `GET /_matrix/client/r0/profile/{userId}/avatar_url`
@ -267,40 +257,31 @@ pub async fn set_avatar_url_route(
/// Returns the avatar_url and blurhash of the user.
///
/// - If user is on another server: Fetches avatar_url and blurhash over federation
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/profile/<_>/avatar_url", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_avatar_url_route(
db: DatabaseGuard,
body: Ruma<get_avatar_url::Request<'_>>,
) -> ConduitResult<get_avatar_url::Response> {
if body.user_id.server_name() != db.globals.server_name() {
let response = db
body: Ruma<get_avatar_url::v3::Request>,
) -> Result<get_avatar_url::v3::Response> {
if body.user_id.server_name() != services().globals.server_name() {
let response = services()
.sending
.send_federation_request(
&db.globals,
body.user_id.server_name(),
federation::query::get_profile_information::v1::Request {
user_id: &body.user_id,
field: Some(&ProfileField::AvatarUrl),
user_id: body.user_id.clone(),
field: Some(ProfileField::AvatarUrl),
},
)
.await?;
return Ok(get_avatar_url::Response {
return Ok(get_avatar_url::v3::Response {
avatar_url: response.avatar_url,
blurhash: response.blurhash,
}
.into());
});
}
Ok(get_avatar_url::Response {
avatar_url: db.users.avatar_url(&body.user_id)?,
blurhash: db.users.blurhash(&body.user_id)?,
}
.into())
Ok(get_avatar_url::v3::Response {
avatar_url: services().users.avatar_url(&body.user_id)?,
blurhash: services().users.blurhash(&body.user_id)?,
})
}
/// # `GET /_matrix/client/r0/profile/{userId}`
@ -308,37 +289,29 @@ pub async fn get_avatar_url_route(
/// Returns the displayname, avatar_url and blurhash of the user.
///
/// - If user is on another server: Fetches profile over federation
#[cfg_attr(
feature = "conduit_bin",
get("/_matrix/client/r0/profile/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn get_profile_route(
db: DatabaseGuard,
body: Ruma<get_profile::Request<'_>>,
) -> ConduitResult<get_profile::Response> {
if body.user_id.server_name() != db.globals.server_name() {
let response = db
body: Ruma<get_profile::v3::Request>,
) -> Result<get_profile::v3::Response> {
if body.user_id.server_name() != services().globals.server_name() {
let response = services()
.sending
.send_federation_request(
&db.globals,
body.user_id.server_name(),
federation::query::get_profile_information::v1::Request {
user_id: &body.user_id,
user_id: body.user_id.clone(),
field: None,
},
)
.await?;
return Ok(get_profile::Response {
return Ok(get_profile::v3::Response {
displayname: response.displayname,
avatar_url: response.avatar_url,
blurhash: response.blurhash,
}
.into());
});
}
if !db.users.exists(&body.user_id)? {
if !services().users.exists(&body.user_id)? {
// Return 404 if this user doesn't exist
return Err(Error::BadRequest(
ErrorKind::NotFound,
@ -346,10 +319,9 @@ pub async fn get_profile_route(
));
}
Ok(get_profile::Response {
avatar_url: db.users.avatar_url(&body.user_id)?,
blurhash: db.users.blurhash(&body.user_id)?,
displayname: db.users.displayname(&body.user_id)?,
}
.into())
Ok(get_profile::v3::Response {
avatar_url: services().users.avatar_url(&body.user_id)?,
blurhash: services().users.blurhash(&body.user_id)?,
displayname: services().users.displayname(&body.user_id)?,
})
}

View file

@ -0,0 +1,432 @@
use crate::{services, Error, Result, Ruma};
use ruma::{
api::client::{
error::ErrorKind,
push::{
delete_pushrule, get_pushers, get_pushrule, get_pushrule_actions, get_pushrule_enabled,
get_pushrules_all, set_pusher, set_pushrule, set_pushrule_actions,
set_pushrule_enabled, RuleScope,
},
},
events::{push_rules::PushRulesEvent, GlobalAccountDataEventType},
push::{InsertPushRuleError, RemovePushRuleError},
};
/// # `GET /_matrix/client/r0/pushrules`
///
/// Retrieves the push rules event for this user.
pub async fn get_pushrules_all_route(
body: Ruma<get_pushrules_all::v3::Request>,
) -> Result<get_pushrules_all::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?
.content;
Ok(get_pushrules_all::v3::Response {
global: account_data.global,
})
}
/// # `GET /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}`
///
/// Retrieves a single specified push rule for this user.
pub async fn get_pushrule_route(
body: Ruma<get_pushrule::v3::Request>,
) -> Result<get_pushrule::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?
.content;
let rule = account_data
.global
.get(body.kind.clone(), &body.rule_id)
.map(Into::into);
if let Some(rule) = rule {
Ok(get_pushrule::v3::Response { rule })
} else {
Err(Error::BadRequest(
ErrorKind::NotFound,
"Push rule not found.",
))
}
}
/// # `PUT /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}`
///
/// Creates a single specified push rule for this user.
pub async fn set_pushrule_route(
body: Ruma<set_pushrule::v3::Request>,
) -> Result<set_pushrule::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let body = body.body;
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let mut account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?;
if let Err(error) = account_data.content.global.insert(
body.rule.clone(),
body.after.as_deref(),
body.before.as_deref(),
) {
let err = match error {
InsertPushRuleError::ServerDefaultRuleId => Error::BadRequest(
ErrorKind::InvalidParam,
"Rule IDs starting with a dot are reserved for server-default rules.",
),
InsertPushRuleError::InvalidRuleId => Error::BadRequest(
ErrorKind::InvalidParam,
"Rule ID containing invalid characters.",
),
InsertPushRuleError::RelativeToServerDefaultRule => Error::BadRequest(
ErrorKind::InvalidParam,
"Can't place a push rule relatively to a server-default rule.",
),
InsertPushRuleError::UnknownRuleId => Error::BadRequest(
ErrorKind::NotFound,
"The before or after rule could not be found.",
),
InsertPushRuleError::BeforeHigherThanAfter => Error::BadRequest(
ErrorKind::InvalidParam,
"The before rule has a higher priority than the after rule.",
),
_ => Error::BadRequest(ErrorKind::InvalidParam, "Invalid data."),
};
return Err(err);
}
services().account_data.update(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
&serde_json::to_value(account_data).expect("to json value always works"),
)?;
Ok(set_pushrule::v3::Response {})
}
/// # `GET /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}/actions`
///
/// Gets the actions of a single specified push rule for this user.
pub async fn get_pushrule_actions_route(
body: Ruma<get_pushrule_actions::v3::Request>,
) -> Result<get_pushrule_actions::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?
.content;
let global = account_data.global;
let actions = global
.get(body.kind.clone(), &body.rule_id)
.map(|rule| rule.actions().to_owned())
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Push rule not found.",
))?;
Ok(get_pushrule_actions::v3::Response { actions })
}
/// # `PUT /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}/actions`
///
/// Sets the actions of a single specified push rule for this user.
pub async fn set_pushrule_actions_route(
body: Ruma<set_pushrule_actions::v3::Request>,
) -> Result<set_pushrule_actions::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let mut account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?;
if account_data
.content
.global
.set_actions(body.kind.clone(), &body.rule_id, body.actions.clone())
.is_err()
{
return Err(Error::BadRequest(
ErrorKind::NotFound,
"Push rule not found.",
));
}
services().account_data.update(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
&serde_json::to_value(account_data).expect("to json value always works"),
)?;
Ok(set_pushrule_actions::v3::Response {})
}
/// # `GET /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}/enabled`
///
/// Gets the enabled status of a single specified push rule for this user.
pub async fn get_pushrule_enabled_route(
body: Ruma<get_pushrule_enabled::v3::Request>,
) -> Result<get_pushrule_enabled::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?;
let global = account_data.content.global;
let enabled = global
.get(body.kind.clone(), &body.rule_id)
.map(|r| r.enabled())
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"Push rule not found.",
))?;
Ok(get_pushrule_enabled::v3::Response { enabled })
}
/// # `PUT /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}/enabled`
///
/// Sets the enabled status of a single specified push rule for this user.
pub async fn set_pushrule_enabled_route(
body: Ruma<set_pushrule_enabled::v3::Request>,
) -> Result<set_pushrule_enabled::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let mut account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?;
if account_data
.content
.global
.set_enabled(body.kind.clone(), &body.rule_id, body.enabled)
.is_err()
{
return Err(Error::BadRequest(
ErrorKind::NotFound,
"Push rule not found.",
));
}
services().account_data.update(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
&serde_json::to_value(account_data).expect("to json value always works"),
)?;
Ok(set_pushrule_enabled::v3::Response {})
}
/// # `DELETE /_matrix/client/r0/pushrules/{scope}/{kind}/{ruleId}`
///
/// Deletes a single specified push rule for this user.
pub async fn delete_pushrule_route(
body: Ruma<delete_pushrule::v3::Request>,
) -> Result<delete_pushrule::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if body.scope != RuleScope::Global {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Scopes other than 'global' are not supported.",
));
}
let event = services()
.account_data
.get(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
)?
.ok_or(Error::BadRequest(
ErrorKind::NotFound,
"PushRules event not found.",
))?;
let mut account_data = serde_json::from_str::<PushRulesEvent>(event.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))?;
if let Err(error) = account_data
.content
.global
.remove(body.kind.clone(), &body.rule_id)
{
let err = match error {
RemovePushRuleError::ServerDefault => Error::BadRequest(
ErrorKind::InvalidParam,
"Cannot delete a server-default pushrule.",
),
RemovePushRuleError::NotFound => {
Error::BadRequest(ErrorKind::NotFound, "Push rule not found.")
}
_ => Error::BadRequest(ErrorKind::InvalidParam, "Invalid data."),
};
return Err(err);
}
services().account_data.update(
None,
sender_user,
GlobalAccountDataEventType::PushRules.to_string().into(),
&serde_json::to_value(account_data).expect("to json value always works"),
)?;
Ok(delete_pushrule::v3::Response {})
}
/// # `GET /_matrix/client/r0/pushers`
///
/// Gets all currently active pushers for the sender user.
pub async fn get_pushers_route(
body: Ruma<get_pushers::v3::Request>,
) -> Result<get_pushers::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
Ok(get_pushers::v3::Response {
pushers: services().pusher.get_pushers(sender_user)?,
})
}
/// # `POST /_matrix/client/r0/pushers/set`
///
/// Adds a pusher for the sender user.
///
/// - TODO: Handle `append`
pub async fn set_pushers_route(
body: Ruma<set_pusher::v3::Request>,
) -> Result<set_pusher::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.pusher
.set_pusher(sender_user, body.action.clone())?;
Ok(set_pusher::v3::Response::default())
}

View file

@ -0,0 +1,182 @@
use crate::{service::rooms::timeline::PduCount, services, Error, Result, Ruma};
use ruma::{
api::client::{error::ErrorKind, read_marker::set_read_marker, receipt::create_receipt},
events::{
receipt::{ReceiptThread, ReceiptType},
RoomAccountDataEventType,
},
MilliSecondsSinceUnixEpoch,
};
use std::collections::BTreeMap;
/// # `POST /_matrix/client/r0/rooms/{roomId}/read_markers`
///
/// Sets different types of read markers.
///
/// - Updates the fully-read account data event to `fully_read`
/// - If `read_receipt` is set: updates the public read receipt EDU
/// - If `private_read_receipt` is set: updates the private read marker
pub async fn set_read_marker_route(
body: Ruma<set_read_marker::v3::Request>,
) -> Result<set_read_marker::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if let Some(fully_read) = &body.fully_read {
let fully_read_event = ruma::events::fully_read::FullyReadEvent {
content: ruma::events::fully_read::FullyReadEventContent {
event_id: fully_read.clone(),
},
};
services().account_data.update(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::FullyRead,
&serde_json::to_value(fully_read_event).expect("to json value always works"),
)?;
}
if body.private_read_receipt.is_some() || body.read_receipt.is_some() {
services()
.rooms
.user
.reset_notification_counts(sender_user, &body.room_id)?;
}
if let Some(event) = &body.private_read_receipt {
let count = services()
.rooms
.timeline
.get_pdu_count(event)?
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Event does not exist.",
))?;
let count = match count {
PduCount::Backfilled(_) => {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Read receipt is in backfilled timeline",
))
}
PduCount::Normal(c) => c,
};
services()
.rooms
.edus
.read_receipt
.private_read_set(&body.room_id, sender_user, count)?;
}
if let Some(event) = &body.read_receipt {
let mut user_receipts = BTreeMap::new();
user_receipts.insert(
sender_user.clone(),
ruma::events::receipt::Receipt {
ts: Some(MilliSecondsSinceUnixEpoch::now()),
thread: ReceiptThread::Unthreaded,
},
);
let mut receipts = BTreeMap::new();
receipts.insert(ReceiptType::Read, user_receipts);
let mut receipt_content = BTreeMap::new();
receipt_content.insert(event.to_owned(), receipts);
services().rooms.edus.read_receipt.readreceipt_update(
sender_user,
&body.room_id,
ruma::events::receipt::ReceiptEvent {
content: ruma::events::receipt::ReceiptEventContent(receipt_content),
room_id: body.room_id.clone(),
},
)?;
}
Ok(set_read_marker::v3::Response {})
}
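The public read receipt assembled above is a doubly nested map: event id, then receipt type, then user id, then the receipt payload. A standalone sketch of that nesting with plain strings and serde_json, just to make the shape visible (illustrative; the real code uses ruma's typed ReceiptEventContent):
use serde_json::json;
use std::collections::BTreeMap;
fn main() {
    let mut user_receipts = BTreeMap::new();
    user_receipts.insert("@alice:example.org", json!({ "ts": 1_700_000_000_000u64 }));
    let mut receipts = BTreeMap::new();
    receipts.insert("m.read", user_receipts);
    let mut receipt_content = BTreeMap::new();
    receipt_content.insert("$event_id", receipts);
    // event id -> receipt type -> user id -> receipt
    assert!(receipt_content["$event_id"]["m.read"].contains_key("@alice:example.org"));
}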
/// # `POST /_matrix/client/r0/rooms/{roomId}/receipt/{receiptType}/{eventId}`
///
/// Sets the fully-read marker, private read marker, or public read receipt EDU, depending on the receipt type.
pub async fn create_receipt_route(
body: Ruma<create_receipt::v3::Request>,
) -> Result<create_receipt::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if matches!(
&body.receipt_type,
create_receipt::v3::ReceiptType::Read | create_receipt::v3::ReceiptType::ReadPrivate
) {
services()
.rooms
.user
.reset_notification_counts(sender_user, &body.room_id)?;
}
match body.receipt_type {
create_receipt::v3::ReceiptType::FullyRead => {
let fully_read_event = ruma::events::fully_read::FullyReadEvent {
content: ruma::events::fully_read::FullyReadEventContent {
event_id: body.event_id.clone(),
},
};
services().account_data.update(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::FullyRead,
&serde_json::to_value(fully_read_event).expect("to json value always works"),
)?;
}
create_receipt::v3::ReceiptType::Read => {
let mut user_receipts = BTreeMap::new();
user_receipts.insert(
sender_user.clone(),
ruma::events::receipt::Receipt {
ts: Some(MilliSecondsSinceUnixEpoch::now()),
thread: ReceiptThread::Unthreaded,
},
);
let mut receipts = BTreeMap::new();
receipts.insert(ReceiptType::Read, user_receipts);
let mut receipt_content = BTreeMap::new();
receipt_content.insert(body.event_id.to_owned(), receipts);
services().rooms.edus.read_receipt.readreceipt_update(
sender_user,
&body.room_id,
ruma::events::receipt::ReceiptEvent {
content: ruma::events::receipt::ReceiptEventContent(receipt_content),
room_id: body.room_id.clone(),
},
)?;
}
create_receipt::v3::ReceiptType::ReadPrivate => {
let count = services()
.rooms
.timeline
.get_pdu_count(&body.event_id)?
.ok_or(Error::BadRequest(
ErrorKind::InvalidParam,
"Event does not exist.",
))?;
let count = match count {
PduCount::Backfilled(_) => {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Read receipt is in backfilled timeline",
))
}
PduCount::Normal(c) => c,
};
services().rooms.edus.read_receipt.private_read_set(
&body.room_id,
sender_user,
count,
)?;
}
_ => return Err(Error::BadRequest(ErrorKind::InvalidParam, "Unsupported receipt type.")),
}
Ok(create_receipt::v3::Response {})
}

View file

@ -0,0 +1,59 @@
use std::sync::Arc;
use crate::{service::pdu::PduBuilder, services, Result, Ruma};
use ruma::{
api::client::redact::redact_event,
events::{room::redaction::RoomRedactionEventContent, TimelineEventType},
};
use serde_json::value::to_raw_value;
/// # `PUT /_matrix/client/r0/rooms/{roomId}/redact/{eventId}/{txnId}`
///
/// Tries to send a redaction event into the room.
///
/// - TODO: Handle txn id
pub async fn redact_event_route(
body: Ruma<redact_event::v3::Request>,
) -> Result<redact_event::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let body = body.body;
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(body.room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
let event_id = services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomRedaction,
content: to_raw_value(&RoomRedactionEventContent {
redacts: Some(body.event_id.clone()),
reason: body.reason.clone(),
})
.expect("event is valid, we just created it"),
unsigned: None,
state_key: None,
redacts: Some(body.event_id.into()),
timestamp: None,
},
sender_user,
&body.room_id,
&state_lock,
)
.await?;
drop(state_lock);
let event_id = (*event_id).to_owned();
Ok(redact_event::v3::Response { event_id })
}

View file

@ -0,0 +1,91 @@
use ruma::api::client::relations::{
get_relating_events, get_relating_events_with_rel_type,
get_relating_events_with_rel_type_and_event_type,
};
use crate::{services, Result, Ruma};
/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}/{relType}/{eventType}`
pub async fn get_relating_events_with_rel_type_and_event_type_route(
body: Ruma<get_relating_events_with_rel_type_and_event_type::v1::Request>,
) -> Result<get_relating_events_with_rel_type_and_event_type::v1::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let res = services()
.rooms
.pdu_metadata
.paginate_relations_with_filter(
sender_user,
&body.room_id,
&body.event_id,
Some(body.event_type.clone()),
Some(body.rel_type.clone()),
body.from.clone(),
body.to.clone(),
body.limit,
body.recurse,
&body.dir,
)?;
Ok(
get_relating_events_with_rel_type_and_event_type::v1::Response {
chunk: res.chunk,
next_batch: res.next_batch,
prev_batch: res.prev_batch,
recursion_depth: res.recursion_depth,
},
)
}
/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}/{relType}`
pub async fn get_relating_events_with_rel_type_route(
body: Ruma<get_relating_events_with_rel_type::v1::Request>,
) -> Result<get_relating_events_with_rel_type::v1::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let res = services()
.rooms
.pdu_metadata
.paginate_relations_with_filter(
sender_user,
&body.room_id,
&body.event_id,
None,
Some(body.rel_type.clone()),
body.from.clone(),
body.to.clone(),
body.limit,
body.recurse,
&body.dir,
)?;
Ok(get_relating_events_with_rel_type::v1::Response {
chunk: res.chunk,
next_batch: res.next_batch,
prev_batch: res.prev_batch,
recursion_depth: res.recursion_depth,
})
}
/// # `GET /_matrix/client/r0/rooms/{roomId}/relations/{eventId}`
pub async fn get_relating_events_route(
body: Ruma<get_relating_events::v1::Request>,
) -> Result<get_relating_events::v1::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
services()
.rooms
.pdu_metadata
.paginate_relations_with_filter(
sender_user,
&body.room_id,
&body.event_id,
None,
None,
body.from.clone(),
body.to.clone(),
body.limit,
body.recurse,
&body.dir,
)
}


@ -0,0 +1,69 @@
use crate::{services, utils::HtmlEscape, Error, Result, Ruma};
use ruma::{
api::client::{error::ErrorKind, room::report_content},
events::room::message,
int,
};
/// # `POST /_matrix/client/r0/rooms/{roomId}/report/{eventId}`
///
/// Reports an inappropriate event to homeserver admins
///
pub async fn report_event_route(
body: Ruma<report_content::v3::Request>,
) -> Result<report_content::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let pdu = match services().rooms.timeline.get_pdu(&body.event_id)? {
Some(pdu) => pdu,
_ => {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Invalid Event ID",
))
}
};
if let Some(true) = body.score.map(|s| s > int!(0) || s < int!(-100)) {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Invalid score, must be within 0 to -100",
));
};
if let Some(true) = body.reason.clone().map(|s| s.chars().count() > 250) {
return Err(Error::BadRequest(
ErrorKind::InvalidParam,
"Reason too long, should be 250 characters or fewer",
));
};
services().admin
.send_message(message::RoomMessageEventContent::text_html(
format!(
"Report received from: {}\n\n\
Event ID: {:?}\n\
Room ID: {:?}\n\
Sent By: {:?}\n\n\
Report Score: {:?}\n\
Report Reason: {:?}",
sender_user, pdu.event_id, pdu.room_id, pdu.sender, body.score, body.reason
),
format!(
"<details><summary>Report received from: <a href=\"https://matrix.to/#/{0:?}\">{0:?}\
</a></summary><ul><li>Event Info<ul><li>Event ID: <code>{1:?}</code>\
<a href=\"https://matrix.to/#/{2:?}/{1:?}\">🔗</a></li><li>Room ID: <code>{2:?}</code>\
</li><li>Sent By: <a href=\"https://matrix.to/#/{3:?}\">{3:?}</a></li></ul></li><li>\
Report Info<ul><li>Report Score: {4:?}</li><li>Report Reason: {5}</li></ul></li>\
</ul></details>",
sender_user,
pdu.event_id,
pdu.room_id,
pdu.sender,
body.score,
HtmlEscape(body.reason.as_deref().unwrap_or(""))
),
));
Ok(report_content::v3::Response {})
}
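
For reference, the two guards above accept scores only in the range -100..=0 and reasons of at most 250 characters. A minimal standalone sketch of the same validation (the helper name and the plain i64/&str types are illustrative, not part of this diff):

fn validate_report(score: Option<i64>, reason: Option<&str>) -> Result<(), &'static str> {
    // Scores outside -100..=0 are rejected, mirroring the check above.
    if let Some(s) = score {
        if !(-100..=0).contains(&s) {
            return Err("Invalid score, must be within -100 to 0");
        }
    }
    // Reasons are limited to 250 characters.
    if let Some(r) = reason {
        if r.chars().count() > 250 {
            return Err("Reason too long, should be 250 characters or fewer");
        }
    }
    Ok(())
}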


@ -0,0 +1,878 @@
use crate::{
api::client_server::invite_helper, service::pdu::PduBuilder, services, Error, Result, Ruma,
};
use ruma::{
api::client::{
error::ErrorKind,
room::{self, aliases, create_room, get_room_event, upgrade_room},
},
events::{
room::{
canonical_alias::RoomCanonicalAliasEventContent,
create::RoomCreateEventContent,
guest_access::{GuestAccess, RoomGuestAccessEventContent},
history_visibility::{HistoryVisibility, RoomHistoryVisibilityEventContent},
join_rules::{JoinRule, RoomJoinRulesEventContent},
member::{MembershipState, RoomMemberEventContent},
name::RoomNameEventContent,
power_levels::RoomPowerLevelsEventContent,
tombstone::RoomTombstoneEventContent,
topic::RoomTopicEventContent,
},
StateEventType, TimelineEventType,
},
int,
serde::JsonObject,
CanonicalJsonObject, OwnedRoomAliasId, RoomAliasId, RoomId, RoomVersionId,
};
use serde_json::{json, value::to_raw_value};
use std::{cmp::max, collections::BTreeMap, sync::Arc};
use tracing::{info, warn};
/// # `POST /_matrix/client/r0/createRoom`
///
/// Creates a new room.
///
/// - Room ID is randomly generated
/// - Create alias if room_alias_name is set
/// - Send create event
/// - Join sender user
/// - Send power levels event
/// - Send canonical room alias
/// - Send join rules
/// - Send history visibility
/// - Send guest access
/// - Send events listed in initial state
/// - Send events implied by `name` and `topic`
/// - Send invite events
pub async fn create_room_route(
body: Ruma<create_room::v3::Request>,
) -> Result<create_room::v3::Response> {
use create_room::v3::RoomPreset;
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let room_id = RoomId::new(services().globals.server_name());
services().rooms.short.get_or_create_shortroomid(&room_id)?;
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
if !services().globals.allow_room_creation()
&& body.appservice_info.is_none()
&& !services().users.is_admin(sender_user)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Room creation has been disabled.",
));
}
let alias: Option<OwnedRoomAliasId> =
body.room_alias_name
.as_ref()
.map_or(Ok(None), |localpart| {
// TODO: Check for invalid characters and maximum length
let alias = RoomAliasId::parse(format!(
"#{}:{}",
localpart,
services().globals.server_name()
))
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, "Invalid alias."))?;
if services()
.rooms
.alias
.resolve_local_alias(&alias)?
.is_some()
{
Err(Error::BadRequest(
ErrorKind::RoomInUse,
"Room alias already exists.",
))
} else {
Ok(Some(alias))
}
})?;
if let Some(ref alias) = alias {
if let Some(ref info) = body.appservice_info {
if !info.aliases.is_match(alias.as_str()) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias is not in namespace.",
));
}
} else if services().appservice.is_exclusive_alias(alias).await {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"Room alias reserved by appservice.",
));
}
}
let room_version = match body.room_version.clone() {
Some(room_version) => {
if services()
.globals
.supported_room_versions()
.contains(&room_version)
{
room_version
} else {
return Err(Error::BadRequest(
ErrorKind::UnsupportedRoomVersion,
"This server does not support that room version.",
));
}
}
None => services().globals.default_room_version(),
};
let content = match &body.creation_content {
Some(content) => {
let mut content = content
.deserialize_as::<CanonicalJsonObject>()
.expect("Invalid creation content");
match room_version {
RoomVersionId::V1
| RoomVersionId::V2
| RoomVersionId::V3
| RoomVersionId::V4
| RoomVersionId::V5
| RoomVersionId::V6
| RoomVersionId::V7
| RoomVersionId::V8
| RoomVersionId::V9
| RoomVersionId::V10 => {
content.insert(
"creator".into(),
json!(&sender_user).try_into().map_err(|_| {
Error::BadRequest(ErrorKind::BadJson, "Invalid creation content")
})?,
);
}
RoomVersionId::V11 => {} // V11 removed the "creator" key
_ => unreachable!("Validity of room version already checked"),
}
content.insert(
"room_version".into(),
json!(room_version.as_str()).try_into().map_err(|_| {
Error::BadRequest(ErrorKind::BadJson, "Invalid creation content")
})?,
);
content
}
None => {
let content = match room_version {
RoomVersionId::V1
| RoomVersionId::V2
| RoomVersionId::V3
| RoomVersionId::V4
| RoomVersionId::V5
| RoomVersionId::V6
| RoomVersionId::V7
| RoomVersionId::V8
| RoomVersionId::V9
| RoomVersionId::V10 => RoomCreateEventContent::new_v1(sender_user.clone()),
RoomVersionId::V11 => RoomCreateEventContent::new_v11(),
_ => unreachable!("Validity of room version already checked"),
};
let mut content = serde_json::from_str::<CanonicalJsonObject>(
to_raw_value(&content)
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Invalid creation content"))?
.get(),
)
.unwrap();
content.insert(
"room_version".into(),
json!(room_version.as_str()).try_into().map_err(|_| {
Error::BadRequest(ErrorKind::BadJson, "Invalid creation content")
})?,
);
content
}
};
// Validate creation content
let de_result = serde_json::from_str::<CanonicalJsonObject>(
to_raw_value(&content)
.expect("Invalid creation content")
.get(),
);
if de_result.is_err() {
return Err(Error::BadRequest(
ErrorKind::BadJson,
"Invalid creation content",
));
}
// 1. The room create event
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomCreate,
content: to_raw_value(&content).expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 2. Let the room creator join
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomMember,
content: to_raw_value(&RoomMemberEventContent {
membership: MembershipState::Join,
displayname: services().users.displayname(sender_user)?,
avatar_url: services().users.avatar_url(sender_user)?,
is_direct: Some(body.is_direct),
third_party_invite: None,
blurhash: services().users.blurhash(sender_user)?,
reason: None,
join_authorized_via_users_server: None,
})
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some(sender_user.to_string()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 3. Power levels
// Figure out preset. We need it for preset specific events
let preset = body.preset.clone().unwrap_or(match &body.visibility {
room::Visibility::Private => RoomPreset::PrivateChat,
room::Visibility::Public => RoomPreset::PublicChat,
_ => RoomPreset::PrivateChat, // Room visibility should not be custom
});
let mut users = BTreeMap::new();
users.insert(sender_user.clone(), int!(100));
if preset == RoomPreset::TrustedPrivateChat {
for invite_ in &body.invite {
users.insert(invite_.clone(), int!(100));
}
}
let mut power_levels_content = serde_json::to_value(RoomPowerLevelsEventContent {
users,
..Default::default()
})
.expect("event is valid, we just created it");
if let Some(power_level_content_override) = &body.power_level_content_override {
let json: JsonObject = serde_json::from_str(power_level_content_override.json().get())
.map_err(|_| {
Error::BadRequest(ErrorKind::BadJson, "Invalid power_level_content_override.")
})?;
for (key, value) in json {
power_levels_content[key] = value;
}
}
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomPowerLevels,
content: to_raw_value(&power_levels_content)
.expect("to_raw_value always works on serde_json::Value"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 4. Canonical room alias
if let Some(room_alias_id) = &alias {
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomCanonicalAlias,
content: to_raw_value(&RoomCanonicalAliasEventContent {
alias: Some(room_alias_id.to_owned()),
alt_aliases: vec![],
})
.expect("We checked that alias earlier, it must be fine"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
}
// 5. Events set by preset
// 5.1 Join Rules
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomJoinRules,
content: to_raw_value(&RoomJoinRulesEventContent::new(match preset {
RoomPreset::PublicChat => JoinRule::Public,
// according to spec "invite" is the default
_ => JoinRule::Invite,
}))
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 5.2 History Visibility
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomHistoryVisibility,
content: to_raw_value(&RoomHistoryVisibilityEventContent::new(
HistoryVisibility::Shared,
))
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 5.3 Guest Access
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomGuestAccess,
content: to_raw_value(&RoomGuestAccessEventContent::new(match preset {
RoomPreset::PublicChat => GuestAccess::Forbidden,
_ => GuestAccess::CanJoin,
}))
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
// 6. Events listed in initial_state
for event in &body.initial_state {
let mut pdu_builder = event.deserialize_as::<PduBuilder>().map_err(|e| {
warn!("Invalid initial state event: {:?}", e);
Error::BadRequest(ErrorKind::InvalidParam, "Invalid initial state event.")
})?;
// Implicit state key defaults to ""
pdu_builder.state_key.get_or_insert_with(|| "".to_owned());
// Silently skip encryption events if they are not allowed
if pdu_builder.event_type == TimelineEventType::RoomEncryption
&& !services().globals.allow_encryption()
{
continue;
}
services()
.rooms
.timeline
.build_and_append_pdu(pdu_builder, sender_user, &room_id, &state_lock)
.await?;
}
// 7. Events implied by name and topic
if let Some(name) = &body.name {
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomName,
content: to_raw_value(&RoomNameEventContent::new(name.clone()))
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
}
if let Some(topic) = &body.topic {
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomTopic,
content: to_raw_value(&RoomTopicEventContent {
topic: topic.clone(),
})
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&room_id,
&state_lock,
)
.await?;
}
// 8. Events implied by invite (and TODO: invite_3pid)
drop(state_lock);
for user_id in &body.invite {
let _ = invite_helper(sender_user, user_id, &room_id, None, body.is_direct).await;
}
// Homeserver specific stuff
if let Some(alias) = alias {
services()
.rooms
.alias
.set_alias(&alias, &room_id, sender_user)?;
}
if body.visibility == room::Visibility::Public {
services().rooms.directory.set_public(&room_id)?;
}
info!("{} created a room", sender_user);
Ok(create_room::v3::Response::new(room_id))
}
/// # `GET /_matrix/client/r0/rooms/{roomId}/event/{eventId}`
///
/// Gets a single event.
///
/// - You have to currently be joined to the room (TODO: Respect history visibility)
pub async fn get_room_event_route(
body: Ruma<get_room_event::v3::Request>,
) -> Result<get_room_event::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services()
.rooms
.timeline
.get_pdu(&body.event_id)?
.ok_or_else(|| {
warn!("Event not found, event ID: {:?}", &body.event_id);
Error::BadRequest(ErrorKind::NotFound, "Event not found.")
})?;
if !services().rooms.state_accessor.user_can_see_event(
sender_user,
&event.room_id,
&body.event_id,
)? {
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view this event.",
));
}
let mut event = (*event).clone();
event.add_age()?;
Ok(get_room_event::v3::Response {
event: event.to_room_event(),
})
}
/// # `GET /_matrix/client/r0/rooms/{roomId}/aliases`
///
/// Lists all aliases of the room.
///
/// - Only users joined to the room are allowed to call this (TODO: Allow any user to call it if history_visibility is world readable)
pub async fn get_room_aliases_route(
body: Ruma<aliases::v3::Request>,
) -> Result<aliases::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.rooms
.state_cache
.is_joined(sender_user, &body.room_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view this room.",
));
}
Ok(aliases::v3::Response {
aliases: services()
.rooms
.alias
.local_aliases_for_room(&body.room_id)
.filter_map(|a| a.ok())
.collect(),
})
}
/// # `POST /_matrix/client/r0/rooms/{roomId}/upgrade`
///
/// Upgrades the room.
///
/// - Creates a replacement room
/// - Sends a tombstone event into the current room
/// - Sender user joins the room
/// - Transfers some state events
/// - Moves local aliases
/// - Modifies old room power levels to prevent users from speaking
pub async fn upgrade_room_route(
body: Ruma<upgrade_room::v3::Request>,
) -> Result<upgrade_room::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.globals
.supported_room_versions()
.contains(&body.new_version)
{
return Err(Error::BadRequest(
ErrorKind::UnsupportedRoomVersion,
"This server does not support that room version.",
));
}
// Create a replacement room
let replacement_room = RoomId::new(services().globals.server_name());
services()
.rooms
.short
.get_or_create_shortroomid(&replacement_room)?;
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(body.room_id.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
// Send a m.room.tombstone event to the old room to indicate that it is not intended to be used any further
// Fail if the sender does not have the required permissions
let tombstone_event_id = services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomTombstone,
content: to_raw_value(&RoomTombstoneEventContent {
body: "This room has been replaced".to_owned(),
replacement_room: replacement_room.clone(),
})
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&body.room_id,
&state_lock,
)
.await?;
// Change lock to replacement room
drop(state_lock);
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(replacement_room.clone())
.or_default(),
);
let state_lock = mutex_state.lock().await;
// Get the old room creation event
let mut create_event_content = serde_json::from_str::<CanonicalJsonObject>(
services()
.rooms
.state_accessor
.room_state_get(&body.room_id, &StateEventType::RoomCreate, "")?
.ok_or_else(|| Error::bad_database("Found room without m.room.create event."))?
.content
.get(),
)
.map_err(|_| Error::bad_database("Invalid room event in database."))?;
// Use the m.room.tombstone event as the predecessor
let predecessor = Some(ruma::events::room::create::PreviousRoom::new(
body.room_id.clone(),
(*tombstone_event_id).to_owned(),
));
// Send a m.room.create event containing a predecessor field and the applicable room_version
match body.new_version {
RoomVersionId::V1
| RoomVersionId::V2
| RoomVersionId::V3
| RoomVersionId::V4
| RoomVersionId::V5
| RoomVersionId::V6
| RoomVersionId::V7
| RoomVersionId::V8
| RoomVersionId::V9
| RoomVersionId::V10 => {
create_event_content.insert(
"creator".into(),
json!(&sender_user).try_into().map_err(|_| {
Error::BadRequest(ErrorKind::BadJson, "Error forming creation event")
})?,
);
}
RoomVersionId::V11 => {
// "creator" key no longer exists in V11 rooms
create_event_content.remove("creator");
}
_ => unreachable!("Validity of room version already checked"),
}
create_event_content.insert(
"room_version".into(),
json!(&body.new_version)
.try_into()
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Error forming creation event"))?,
);
create_event_content.insert(
"predecessor".into(),
json!(predecessor)
.try_into()
.map_err(|_| Error::BadRequest(ErrorKind::BadJson, "Error forming creation event"))?,
);
// Validate creation event content
let de_result = serde_json::from_str::<CanonicalJsonObject>(
to_raw_value(&create_event_content)
.expect("Error forming creation event")
.get(),
);
if de_result.is_err() {
return Err(Error::BadRequest(
ErrorKind::BadJson,
"Error forming creation event",
));
}
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomCreate,
content: to_raw_value(&create_event_content)
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&replacement_room,
&state_lock,
)
.await?;
// Join the new room
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomMember,
content: to_raw_value(&RoomMemberEventContent {
membership: MembershipState::Join,
displayname: services().users.displayname(sender_user)?,
avatar_url: services().users.avatar_url(sender_user)?,
is_direct: None,
third_party_invite: None,
blurhash: services().users.blurhash(sender_user)?,
reason: None,
join_authorized_via_users_server: None,
})
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some(sender_user.to_string()),
redacts: None,
timestamp: None,
},
sender_user,
&replacement_room,
&state_lock,
)
.await?;
// Recommended transferable state events list from the specs
let transferable_state_events = vec![
StateEventType::RoomServerAcl,
StateEventType::RoomEncryption,
StateEventType::RoomName,
StateEventType::RoomAvatar,
StateEventType::RoomTopic,
StateEventType::RoomGuestAccess,
StateEventType::RoomHistoryVisibility,
StateEventType::RoomJoinRules,
StateEventType::RoomPowerLevels,
];
// Replicate transferable state events to the new room
for event_type in transferable_state_events {
let event_content =
match services()
.rooms
.state_accessor
.room_state_get(&body.room_id, &event_type, "")?
{
Some(v) => v.content.clone(),
None => continue, // Skipping missing events.
};
services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: event_type.to_string().into(),
content: event_content,
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&replacement_room,
&state_lock,
)
.await?;
}
// Moves any local aliases to the new room
for alias in services()
.rooms
.alias
.local_aliases_for_room(&body.room_id)
.filter_map(|r| r.ok())
{
services()
.rooms
.alias
.set_alias(&alias, &replacement_room, sender_user)?;
}
// Get the old room power levels
let mut power_levels_event_content: RoomPowerLevelsEventContent = serde_json::from_str(
services()
.rooms
.state_accessor
.room_state_get(&body.room_id, &StateEventType::RoomPowerLevels, "")?
.ok_or_else(|| Error::bad_database("Found room without m.room.create event."))?
.content
.get(),
)
.map_err(|_| Error::bad_database("Invalid room event in database."))?;
// Setting events_default and invite to the greater of 50 and users_default + 1
let new_level = max(int!(50), power_levels_event_content.users_default + int!(1));
power_levels_event_content.events_default = new_level;
power_levels_event_content.invite = new_level;
// Modify the power levels in the old room to prevent sending of events and inviting new users
let _ = services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: TimelineEventType::RoomPowerLevels,
content: to_raw_value(&power_levels_event_content)
.expect("event is valid, we just created it"),
unsigned: None,
state_key: Some("".to_owned()),
redacts: None,
timestamp: None,
},
sender_user,
&body.room_id,
&state_lock,
)
.await?;
drop(state_lock);
// Return the replacement room id
Ok(upgrade_room::v3::Response { replacement_room })
}
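
The power-level lockdown at the end of the upgrade reduces to a single rule. A small sketch with plain integers standing in for ruma's Int type (the function name is illustrative):

// events_default and invite in the old room are raised to whichever is
// greater: 50, or one above the room's users_default.
fn lockdown_level(users_default: i64) -> i64 {
    std::cmp::max(50, users_default + 1)
}

// e.g. lockdown_level(0) == 50 and lockdown_level(100) == 101, so ordinary
// members can no longer send events or invite in the replaced room.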


@ -1,9 +1,12 @@
use crate::{database::DatabaseGuard, ConduitResult, Error, Ruma};
use ruma::api::client::{error::ErrorKind, r0::search::search_events};
use crate::{services, Error, Result, Ruma};
use ruma::api::client::{
error::ErrorKind,
search::search_events::{
self,
v3::{EventContextResult, ResultCategories, ResultRoomEvents, SearchResult},
},
};
#[cfg(feature = "conduit_bin")]
use rocket::post;
use search_events::{EventContextResult, ResultCategories, ResultRoomEvents, SearchResult};
use std::collections::BTreeMap;
/// # `POST /_matrix/client/r0/search`
@ -11,44 +14,47 @@ use std::collections::BTreeMap;
/// Searches rooms for messages.
///
/// - Only works if the user is currently joined to the room (TODO: Respect history visibility)
#[cfg_attr(
feature = "conduit_bin",
post("/_matrix/client/r0/search", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn search_events_route(
db: DatabaseGuard,
body: Ruma<search_events::Request<'_>>,
) -> ConduitResult<search_events::Response> {
body: Ruma<search_events::v3::Request>,
) -> Result<search_events::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let search_criteria = body.search_categories.room_events.as_ref().unwrap();
let filter = search_criteria.filter.clone().unwrap_or_default();
let filter = &search_criteria.filter;
let room_ids = filter.rooms.clone().unwrap_or_else(|| {
db.rooms
services()
.rooms
.state_cache
.rooms_joined(sender_user)
.filter_map(|r| r.ok())
.collect()
});
let limit = filter.limit.map_or(10, |l| u64::from(l) as usize);
// Use limit or else 10, with maximum 100
let limit = filter.limit.map_or(10, u64::from).min(100) as usize;
let mut searches = Vec::new();
for room_id in room_ids {
if !db.rooms.is_joined(sender_user, &room_id)? {
if !services()
.rooms
.state_cache
.is_joined(sender_user, &room_id)?
{
return Err(Error::BadRequest(
ErrorKind::Forbidden,
ErrorKind::forbidden(),
"You don't have permission to view this room.",
));
}
let search = db
if let Some(search) = services()
.rooms
.search_pdus(&room_id, &search_criteria.search_term)?;
searches.push(search.0.peekable());
.search
.search_pdus(&room_id, &search_criteria.search_term)?
{
searches.push(search.0.peekable());
}
}
let skip = match body.next_batch.as_ref().map(|s| s.parse()) {
@ -76,6 +82,22 @@ pub async fn search_events_route(
let results: Vec<_> = results
.iter()
.filter_map(|result| {
services()
.rooms
.timeline
.get_pdu_from_id(result)
.ok()?
.filter(|pdu| {
!pdu.is_redacted()
&& services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &pdu.room_id, &pdu.event_id)
.unwrap_or(false)
})
.map(|pdu| pdu.to_room_event())
})
.map(|result| {
Ok::<_, Error>(SearchResult {
context: EventContextResult {
@ -86,10 +108,7 @@ pub async fn search_events_route(
start: None,
},
rank: None,
result: db
.rooms
.get_pdu_from_id(result)?
.map(|pdu| pdu.to_room_event()),
result: Some(result),
})
})
.filter_map(|r| r.ok())
@ -97,13 +116,13 @@ pub async fn search_events_route(
.take(limit)
.collect();
let next_batch = if results.len() < limit as usize {
let next_batch = if results.len() < limit {
None
} else {
Some((skip + limit).to_string())
};
Ok(search_events::Response::new(ResultCategories {
Ok(search_events::v3::Response::new(ResultCategories {
room_events: ResultRoomEvents {
count: Some((results.len() as u32).into()), // TODO: set this to none. Element shouldn't depend on it
groups: BTreeMap::new(), // TODO
@ -116,6 +135,5 @@ pub async fn search_events_route(
.map(str::to_lowercase)
.collect(),
},
})
.into())
}))
}


@ -0,0 +1,279 @@
use super::{DEVICE_ID_LENGTH, TOKEN_LENGTH};
use crate::{services, utils, Error, Result, Ruma};
use ruma::{
api::client::{
error::ErrorKind,
session::{get_login_types, login, logout, logout_all},
uiaa::UserIdentifier,
},
UserId,
};
use serde::Deserialize;
use tracing::{info, warn};
#[derive(Debug, Deserialize)]
struct Claims {
sub: String,
//exp: usize,
}
/// # `GET /_matrix/client/r0/login`
///
/// Get the supported login types of this server. One of these should be used as the `type` field
/// when logging in.
pub async fn get_login_types_route(
_body: Ruma<get_login_types::v3::Request>,
) -> Result<get_login_types::v3::Response> {
Ok(get_login_types::v3::Response::new(vec![
get_login_types::v3::LoginType::Password(Default::default()),
get_login_types::v3::LoginType::ApplicationService(Default::default()),
]))
}
/// # `POST /_matrix/client/r0/login`
///
/// Authenticates the user and returns an access token it can use in subsequent requests.
///
/// - The user needs to authenticate using their password (or if enabled using a json web token)
/// - If `device_id` is known: invalidates old access token of that device
/// - If `device_id` is unknown: creates a new device
/// - Returns access token that is associated with the user and device
///
/// Note: You can use [`GET /_matrix/client/r0/login`](fn.get_login_types_route.html) to see
/// supported login types.
pub async fn login_route(body: Ruma<login::v3::Request>) -> Result<login::v3::Response> {
// To allow deprecated login methods
#![allow(deprecated)]
// Validate login method
// TODO: Other login methods
let user_id = match &body.login_info {
login::v3::LoginInfo::Password(login::v3::Password {
identifier,
password,
user,
address: _,
medium: _,
}) => {
let user_id = if let Some(UserIdentifier::UserIdOrLocalpart(user_id)) = identifier {
UserId::parse_with_server_name(
user_id.to_lowercase(),
services().globals.server_name(),
)
} else if let Some(user) = user {
UserId::parse(user)
} else {
warn!("Bad login type: {:?}", &body.login_info);
return Err(Error::BadRequest(ErrorKind::forbidden(), "Bad login type."));
}
.map_err(|_| Error::BadRequest(ErrorKind::InvalidUsername, "Username is invalid."))?;
if services().appservice.is_exclusive_user_id(&user_id).await {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User id reserved by appservice.",
));
}
let hash = services()
.users
.password_hash(&user_id)?
.ok_or(Error::BadRequest(
ErrorKind::forbidden(),
"Wrong username or password.",
))?;
if hash.is_empty() {
return Err(Error::BadRequest(
ErrorKind::UserDeactivated,
"The user has been deactivated",
));
}
let hash_matches = argon2::verify_encoded(&hash, password.as_bytes()).unwrap_or(false);
if !hash_matches {
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Wrong username or password.",
));
}
user_id
}
login::v3::LoginInfo::Token(login::v3::Token { token }) => {
if let Some(jwt_decoding_key) = services().globals.jwt_decoding_key() {
let token = jsonwebtoken::decode::<Claims>(
token,
jwt_decoding_key,
&jsonwebtoken::Validation::default(),
)
.map_err(|_| Error::BadRequest(ErrorKind::InvalidUsername, "Token is invalid."))?;
let username = token.claims.sub.to_lowercase();
let user_id =
UserId::parse_with_server_name(username, services().globals.server_name())
.map_err(|_| {
Error::BadRequest(ErrorKind::InvalidUsername, "Username is invalid.")
})?;
if services().appservice.is_exclusive_user_id(&user_id).await {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User id reserved by appservice.",
));
}
user_id
} else {
return Err(Error::BadRequest(
ErrorKind::Unknown,
"Token login is not supported (server has no jwt decoding key).",
));
}
}
login::v3::LoginInfo::ApplicationService(login::v3::ApplicationService {
identifier,
user,
}) => {
let user_id = if let Some(UserIdentifier::UserIdOrLocalpart(user_id)) = identifier {
UserId::parse_with_server_name(
user_id.to_lowercase(),
services().globals.server_name(),
)
} else if let Some(user) = user {
UserId::parse(user)
} else {
warn!("Bad login type: {:?}", &body.login_info);
return Err(Error::BadRequest(ErrorKind::forbidden(), "Bad login type."));
}
.map_err(|_| Error::BadRequest(ErrorKind::InvalidUsername, "Username is invalid."))?;
if let Some(ref info) = body.appservice_info {
if !info.is_user_match(&user_id) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User is not in namespace.",
));
}
} else {
return Err(Error::BadRequest(
ErrorKind::MissingToken,
"Missing appservice token.",
));
}
user_id
}
_ => {
warn!("Unsupported or unknown login type: {:?}", &body.login_info);
return Err(Error::BadRequest(
ErrorKind::Unknown,
"Unsupported login type.",
));
}
};
// Generate new device id if the user didn't specify one
let device_id = body
.device_id
.clone()
.unwrap_or_else(|| utils::random_string(DEVICE_ID_LENGTH).into());
// Generate a new token for the device
let token = utils::random_string(TOKEN_LENGTH);
// Determine if device_id was provided and exists in the db for this user
let device_exists = body.device_id.as_ref().map_or(false, |device_id| {
services()
.users
.all_device_ids(&user_id)
.any(|x| x.as_ref().map_or(false, |v| v == device_id))
});
if device_exists {
services().users.set_token(&user_id, &device_id, &token)?;
} else {
services().users.create_device(
&user_id,
&device_id,
&token,
body.initial_device_display_name.clone(),
)?;
}
info!("{} logged in", user_id);
// Homeservers are still required to send the `home_server` field
#[allow(deprecated)]
Ok(login::v3::Response {
user_id,
access_token: token,
home_server: Some(services().globals.server_name().to_owned()),
device_id,
well_known: None,
refresh_token: None,
expires_in: None,
})
}
/// # `POST /_matrix/client/r0/logout`
///
/// Log out the current device.
///
/// - Invalidates access token
/// - Deletes device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets to-device events
/// - Triggers device list updates
pub async fn logout_route(body: Ruma<logout::v3::Request>) -> Result<logout::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_ref().expect("user is authenticated");
if let Some(ref info) = body.appservice_info {
if !info.is_user_match(sender_user) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User is not in namespace.",
));
}
}
services().users.remove_device(sender_user, sender_device)?;
Ok(logout::v3::Response::new())
}
/// # `POST /_matrix/client/r0/logout/all`
///
/// Log out all devices of this user.
///
/// - Invalidates all access tokens
/// - Deletes all device metadata (device id, device display name, last seen ip, last seen ts)
/// - Forgets all to-device events
/// - Triggers device list updates
///
/// Note: This is equivalent to calling [`POST /_matrix/client/r0/logout`](fn.logout_route.html)
/// from each device of this user.
pub async fn logout_all_route(
body: Ruma<logout_all::v3::Request>,
) -> Result<logout_all::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if let Some(ref info) = body.appservice_info {
if !info.is_user_match(sender_user) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User is not in namespace.",
));
}
} else {
return Err(Error::BadRequest(
ErrorKind::MissingToken,
"Missing appservice token.",
));
}
for device_id in services().users.all_device_ids(sender_user).flatten() {
services().users.remove_device(sender_user, &device_id)?;
}
Ok(logout_all::v3::Response::new())
}
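
The token login branch above decodes a JWT whose `sub` claim becomes the localpart. A hedged sketch of minting such a token with the same `jsonwebtoken` crate, assuming the server's `jwt_secret` is shared out of band; the `exp` claim is included because the crate's default `Validation` expects one, and the 300-second lifetime is arbitrary:

use jsonwebtoken::{encode, EncodingKey, Header};
use serde::Serialize;
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Serialize)]
struct LoginClaims {
    sub: String, // becomes the localpart on the homeserver
    exp: usize,
}

fn make_login_token(localpart: &str, jwt_secret: &[u8]) -> jsonwebtoken::errors::Result<String> {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("time is after the unix epoch")
        .as_secs() as usize;
    let claims = LoginClaims {
        sub: localpart.to_owned(),
        exp: now + 300,
    };
    encode(&Header::default(), &claims, &EncodingKey::from_secret(jwt_secret))
}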


@ -0,0 +1,34 @@
use crate::{services, Result, Ruma};
use ruma::api::client::space::get_hierarchy;
/// # `GET /_matrix/client/v1/rooms/{room_id}/hierarchy`
///
/// Paginates over the space tree in a depth-first manner to locate child rooms of a given space.
pub async fn get_hierarchy_route(
body: Ruma<get_hierarchy::v1::Request>,
) -> Result<get_hierarchy::v1::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let skip = body
.from
.as_ref()
.and_then(|s| s.parse::<usize>().ok())
.unwrap_or(0);
let limit = body.limit.map_or(10, u64::from).min(100) as usize;
let max_depth = body.max_depth.map_or(3, u64::from).min(10) as usize + 1; // +1 to skip the space room itself
services()
.rooms
.spaces
.get_hierarchy(
sender_user,
&body.room_id,
limit,
skip,
max_depth,
body.suggested_only,
)
.await
}
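
The parameter handling above comes down to three clamps. A std-only sketch (the function name is illustrative):

// `from` parses to a skip offset, `limit` defaults to 10 and is capped at 100,
// and `max_depth` defaults to 3, is capped at 10, and gets +1 so the space
// room itself is not counted against the depth.
fn hierarchy_params(
    from: Option<&str>,
    limit: Option<u64>,
    max_depth: Option<u64>,
) -> (usize, usize, usize) {
    let skip = from.and_then(|s| s.parse::<usize>().ok()).unwrap_or(0);
    let limit = limit.map_or(10, |l| l.min(100)) as usize;
    let max_depth = max_depth.map_or(3, |d| d.min(10)) as usize + 1;
    (skip, limit, max_depth)
}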


@ -0,0 +1,266 @@
use std::sync::Arc;
use crate::{service::pdu::PduBuilder, services, Error, Result, Ruma, RumaResponse};
use ruma::{
api::client::{
error::ErrorKind,
state::{get_state_events, get_state_events_for_key, send_state_event},
},
events::{
room::canonical_alias::RoomCanonicalAliasEventContent, AnyStateEventContent, StateEventType,
},
serde::Raw,
EventId, MilliSecondsSinceUnixEpoch, RoomId, UserId,
};
use tracing::log::warn;
/// # `PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}`
///
/// Sends a state event into the room.
///
/// - The only requirement for the content is that it has to be valid json
/// - Tries to send the event into the room, auth rules will determine if it is allowed
/// - If event is new canonical_alias: Rejects if alias is incorrect
pub async fn send_state_event_for_key_route(
body: Ruma<send_state_event::v3::Request>,
) -> Result<send_state_event::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event_id = send_state_event_for_key_helper(
sender_user,
&body.room_id,
&body.event_type,
&body.body.body, // Yes, I hate it too
body.state_key.to_owned(),
if body.appservice_info.is_some() {
body.timestamp
} else {
None
},
)
.await?;
let event_id = (*event_id).to_owned();
Ok(send_state_event::v3::Response { event_id })
}
/// # `PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}`
///
/// Sends a state event into the room.
///
/// - The only requirement for the content is that it has to be valid json
/// - Tries to send the event into the room, auth rules will determine if it is allowed
/// - If event is new canonical_alias: Rejects if alias is incorrect
pub async fn send_state_event_for_empty_key_route(
body: Ruma<send_state_event::v3::Request>,
) -> Result<RumaResponse<send_state_event::v3::Response>> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
// Forbid m.room.encryption if encryption is disabled
if body.event_type == StateEventType::RoomEncryption && !services().globals.allow_encryption() {
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Encryption has been disabled",
));
}
let event_id = send_state_event_for_key_helper(
sender_user,
&body.room_id,
&body.event_type.to_string().into(),
&body.body.body,
body.state_key.to_owned(),
if body.appservice_info.is_some() {
body.timestamp
} else {
None
},
)
.await?;
let event_id = (*event_id).to_owned();
Ok(send_state_event::v3::Response { event_id }.into())
}
/// # `GET /_matrix/client/r0/rooms/{roomid}/state`
///
/// Get all state events for a room.
///
/// - If not joined: Only works if current room history visibility is world readable
pub async fn get_state_events_route(
body: Ruma<get_state_events::v3::Request>,
) -> Result<get_state_events::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.rooms
.state_accessor
.user_can_see_state_events(sender_user, &body.room_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view the room state.",
));
}
Ok(get_state_events::v3::Response {
room_state: services()
.rooms
.state_accessor
.room_state_full(&body.room_id)
.await?
.values()
.map(|pdu| pdu.to_state_event())
.collect(),
})
}
/// # `GET /_matrix/client/r0/rooms/{roomid}/state/{eventType}/{stateKey}`
///
/// Get single state event of a room.
///
/// - If not joined: Only works if current room history visibility is world readable
pub async fn get_state_events_for_key_route(
body: Ruma<get_state_events_for_key::v3::Request>,
) -> Result<get_state_events_for_key::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.rooms
.state_accessor
.user_can_see_state_events(sender_user, &body.room_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view the room state.",
));
}
let event = services()
.rooms
.state_accessor
.room_state_get(&body.room_id, &body.event_type, &body.state_key)?
.ok_or_else(|| {
warn!(
"State event {:?} not found in room {:?}",
&body.event_type, &body.room_id
);
Error::BadRequest(ErrorKind::NotFound, "State event not found.")
})?;
Ok(get_state_events_for_key::v3::Response {
content: serde_json::from_str(event.content.get())
.map_err(|_| Error::bad_database("Invalid event content in database"))?,
})
}
/// # `GET /_matrix/client/r0/rooms/{roomid}/state/{eventType}`
///
/// Get single state event of a room.
///
/// - If not joined: Only works if current room history visibility is world readable
pub async fn get_state_events_for_empty_key_route(
body: Ruma<get_state_events_for_key::v3::Request>,
) -> Result<RumaResponse<get_state_events_for_key::v3::Response>> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.rooms
.state_accessor
.user_can_see_state_events(sender_user, &body.room_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You don't have permission to view the room state.",
));
}
let event = services()
.rooms
.state_accessor
.room_state_get(&body.room_id, &body.event_type, "")?
.ok_or_else(|| {
warn!(
"State event {:?} not found in room {:?}",
&body.event_type, &body.room_id
);
Error::BadRequest(ErrorKind::NotFound, "State event not found.")
})?;
Ok(get_state_events_for_key::v3::Response {
content: serde_json::from_str(event.content.get())
.map_err(|_| Error::bad_database("Invalid event content in database"))?,
}
.into())
}
async fn send_state_event_for_key_helper(
sender: &UserId,
room_id: &RoomId,
event_type: &StateEventType,
json: &Raw<AnyStateEventContent>,
state_key: String,
timestamp: Option<MilliSecondsSinceUnixEpoch>,
) -> Result<Arc<EventId>> {
let sender_user = sender;
// TODO: Review this check, error if event is unparsable, use event type, allow alias if it
// previously existed
if let Ok(canonical_alias) =
serde_json::from_str::<RoomCanonicalAliasEventContent>(json.json().get())
{
let mut aliases = canonical_alias.alt_aliases.clone();
if let Some(alias) = canonical_alias.alias {
aliases.push(alias);
}
for alias in aliases {
if alias.server_name() != services().globals.server_name()
|| services()
.rooms
.alias
.resolve_local_alias(&alias)?
.filter(|room| room == room_id) // Make sure it's the right room
.is_none()
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You are only allowed to send canonical_alias \
events when it's aliases already exists",
));
}
}
}
let mutex_state = Arc::clone(
services()
.globals
.roomid_mutex_state
.write()
.await
.entry(room_id.to_owned())
.or_default(),
);
let state_lock = mutex_state.lock().await;
let event_id = services()
.rooms
.timeline
.build_and_append_pdu(
PduBuilder {
event_type: event_type.to_string().into(),
content: serde_json::from_str(json.json().get()).expect("content is valid json"),
unsigned: None,
state_key: Some(state_key),
redacts: None,
timestamp,
},
sender_user,
room_id,
&state_lock,
)
.await?;
Ok(event_id)
}

File diff suppressed because it is too large


@ -0,0 +1,126 @@
use crate::{services, Error, Result, Ruma};
use ruma::{
api::client::tag::{create_tag, delete_tag, get_tags},
events::{
tag::{TagEvent, TagEventContent},
RoomAccountDataEventType,
},
};
use std::collections::BTreeMap;
/// # `PUT /_matrix/client/r0/user/{userId}/rooms/{roomId}/tags/{tag}`
///
/// Adds a tag to the room.
///
/// - Inserts the tag into the tag event of the room account data.
pub async fn update_tag_route(
body: Ruma<create_tag::v3::Request>,
) -> Result<create_tag::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services().account_data.get(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::Tag,
)?;
let mut tags_event = event
.map(|e| {
serde_json::from_str(e.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))
})
.unwrap_or_else(|| {
Ok(TagEvent {
content: TagEventContent {
tags: BTreeMap::new(),
},
})
})?;
tags_event
.content
.tags
.insert(body.tag.clone().into(), body.tag_info.clone());
services().account_data.update(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::Tag,
&serde_json::to_value(tags_event).expect("to json value always works"),
)?;
Ok(create_tag::v3::Response {})
}
/// # `DELETE /_matrix/client/r0/user/{userId}/rooms/{roomId}/tags/{tag}`
///
/// Deletes a tag from the room.
///
/// - Removes the tag from the tag event of the room account data.
pub async fn delete_tag_route(
body: Ruma<delete_tag::v3::Request>,
) -> Result<delete_tag::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services().account_data.get(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::Tag,
)?;
let mut tags_event = event
.map(|e| {
serde_json::from_str(e.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))
})
.unwrap_or_else(|| {
Ok(TagEvent {
content: TagEventContent {
tags: BTreeMap::new(),
},
})
})?;
tags_event.content.tags.remove(&body.tag.clone().into());
services().account_data.update(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::Tag,
&serde_json::to_value(tags_event).expect("to json value always works"),
)?;
Ok(delete_tag::v3::Response {})
}
/// # `GET /_matrix/client/r0/user/{userId}/rooms/{roomId}/tags`
///
/// Returns tags on the room.
///
/// - Gets the tag event of the room account data.
pub async fn get_tags_route(body: Ruma<get_tags::v3::Request>) -> Result<get_tags::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let event = services().account_data.get(
Some(&body.room_id),
sender_user,
RoomAccountDataEventType::Tag,
)?;
let tags_event = event
.map(|e| {
serde_json::from_str(e.get())
.map_err(|_| Error::bad_database("Invalid account data event in db."))
})
.unwrap_or_else(|| {
Ok(TagEvent {
content: TagEventContent {
tags: BTreeMap::new(),
},
})
})?;
Ok(get_tags::v3::Response {
tags: tags_event.content.tags,
})
}
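
All three routes above read and write the same `m.tag` room account-data event. A sketch of its JSON shape; the tag name and order value are illustrative:

use serde_json::json;

fn example_tag_account_data() -> serde_json::Value {
    json!({
        "content": {
            "tags": {
                // Tag names are either namespaced (m.favourite, m.lowpriority)
                // or user-defined (u.<something>); order is an optional float.
                "m.favourite": { "order": 0.25 }
            }
        }
    })
}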


@ -0,0 +1,16 @@
use crate::{Result, Ruma};
use ruma::api::client::thirdparty::get_protocols;
use std::collections::BTreeMap;
/// # `GET /_matrix/client/r0/thirdparty/protocols`
///
/// TODO: Fetches all metadata about protocols supported by the homeserver.
pub async fn get_protocols_route(
_body: Ruma<get_protocols::v3::Request>,
) -> Result<get_protocols::v3::Response> {
// TODO
Ok(get_protocols::v3::Response {
protocols: BTreeMap::new(),
})
}


@ -0,0 +1,49 @@
use ruma::api::client::{error::ErrorKind, threads::get_threads};
use crate::{services, Error, Result, Ruma};
/// # `GET /_matrix/client/r0/rooms/{roomId}/threads`
pub async fn get_threads_route(
body: Ruma<get_threads::v1::Request>,
) -> Result<get_threads::v1::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
// Use limit or else 10, with maximum 100
let limit = body
.limit
.and_then(|l| l.try_into().ok())
.unwrap_or(10)
.min(100);
let from = if let Some(from) = &body.from {
from.parse()
.map_err(|_| Error::BadRequest(ErrorKind::InvalidParam, ""))?
} else {
u64::MAX
};
let threads = services()
.rooms
.threads
.threads_until(sender_user, &body.room_id, from, &body.include)?
.take(limit)
.filter_map(|r| r.ok())
.filter(|(_, pdu)| {
services()
.rooms
.state_accessor
.user_can_see_event(sender_user, &body.room_id, &pdu.event_id)
.unwrap_or(false)
})
.collect::<Vec<_>>();
let next_batch = threads.last().map(|(count, _)| count.to_string());
Ok(get_threads::v1::Response {
chunk: threads
.into_iter()
.map(|(_, pdu)| pdu.to_room_event())
.collect(),
next_batch,
})
}
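
Pagination in the threads route works backwards from the newest thread: an absent `from` token means starting at `u64::MAX`, and the count of the last returned thread is echoed back as `next_batch`. A minimal sketch of the token handling (the function name is illustrative):

fn parse_from_token(from: Option<&str>) -> Result<u64, std::num::ParseIntError> {
    match from {
        // No token: start from the newest possible position.
        None => Ok(u64::MAX),
        // Otherwise the token must be a plain integer count.
        Some(f) => f.parse(),
    }
}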


@ -1,93 +1,81 @@
use std::collections::BTreeMap;
use crate::{database::DatabaseGuard, ConduitResult, Error, Ruma};
use crate::{services, Error, Result, Ruma};
use ruma::{
api::{
client::{error::ErrorKind, r0::to_device::send_event_to_device},
client::{error::ErrorKind, to_device::send_event_to_device},
federation::{self, transactions::edu::DirectDeviceContent},
},
events::EventType,
to_device::DeviceIdOrAllDevices,
};
#[cfg(feature = "conduit_bin")]
use rocket::put;
/// # `PUT /_matrix/client/r0/sendToDevice/{eventType}/{txnId}`
///
/// Send a to-device event to a set of client devices.
#[cfg_attr(
feature = "conduit_bin",
put("/_matrix/client/r0/sendToDevice/<_>/<_>", data = "<body>")
)]
#[tracing::instrument(skip(db, body))]
pub async fn send_event_to_device_route(
db: DatabaseGuard,
body: Ruma<send_event_to_device::Request<'_>>,
) -> ConduitResult<send_event_to_device::Response> {
body: Ruma<send_event_to_device::v3::Request>,
) -> Result<send_event_to_device::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let sender_device = body.sender_device.as_deref();
// TODO: uncomment when https://github.com/vector-im/element-android/issues/3589 is solved
// Check if this is a new transaction id
/*
if db
if services()
.transaction_ids
.existing_txnid(sender_user, sender_device, &body.txn_id)?
.is_some()
{
return Ok(send_event_to_device::Response.into());
return Ok(send_event_to_device::v3::Response {});
}
*/
for (target_user_id, map) in &body.messages {
for (target_device_id_maybe, event) in map {
if target_user_id.server_name() != db.globals.server_name() {
if target_user_id.server_name() != services().globals.server_name() {
let mut map = BTreeMap::new();
map.insert(target_device_id_maybe.clone(), event.clone());
let mut messages = BTreeMap::new();
messages.insert(target_user_id.clone(), map);
let count = services().globals.next_count()?;
db.sending.send_reliable_edu(
services().sending.send_reliable_edu(
target_user_id.server_name(),
serde_json::to_vec(&federation::transactions::edu::Edu::DirectToDevice(
DirectDeviceContent {
sender: sender_user.clone(),
ev_type: EventType::from(&body.event_type),
message_id: body.txn_id.clone(),
ev_type: body.event_type.clone(),
message_id: count.to_string().into(),
messages,
},
))
.expect("DirectToDevice EDU can be serialized"),
db.globals.next_count()?,
count,
)?;
continue;
}
match target_device_id_maybe {
DeviceIdOrAllDevices::DeviceId(target_device_id) => db.users.add_to_device_event(
sender_user,
target_user_id,
target_device_id,
&body.event_type,
event.deserialize_as().map_err(|_| {
Error::BadRequest(ErrorKind::InvalidParam, "Event is invalid")
})?,
&db.globals,
)?,
DeviceIdOrAllDevices::DeviceId(target_device_id) => {
services().users.add_to_device_event(
sender_user,
target_user_id,
target_device_id,
&body.event_type.to_string(),
event.deserialize_as().map_err(|_| {
Error::BadRequest(ErrorKind::InvalidParam, "Event is invalid")
})?,
)?
}
DeviceIdOrAllDevices::AllDevices => {
for target_device_id in db.users.all_device_ids(target_user_id) {
db.users.add_to_device_event(
for target_device_id in services().users.all_device_ids(target_user_id) {
services().users.add_to_device_event(
sender_user,
target_user_id,
&target_device_id?,
&body.event_type,
&body.event_type.to_string(),
event.deserialize_as().map_err(|_| {
Error::BadRequest(ErrorKind::InvalidParam, "Event is invalid")
})?,
&db.globals,
)?;
}
}
@ -96,10 +84,9 @@ pub async fn send_event_to_device_route(
}
// Save transaction id with empty data
db.transaction_ids
services()
.transaction_ids
.add_txnid(sender_user, sender_device, &body.txn_id, &[])?;
db.flush()?;
Ok(send_event_to_device::Response {}.into())
Ok(send_event_to_device::v3::Response {})
}


@ -0,0 +1,46 @@
use crate::{services, utils, Error, Result, Ruma};
use ruma::api::client::{error::ErrorKind, typing::create_typing_event};
/// # `PUT /_matrix/client/r0/rooms/{roomId}/typing/{userId}`
///
/// Sets the typing state of the sender user.
pub async fn create_typing_event_route(
body: Ruma<create_typing_event::v3::Request>,
) -> Result<create_typing_event::v3::Response> {
use create_typing_event::v3::Typing;
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
if !services()
.rooms
.state_cache
.is_joined(sender_user, &body.room_id)?
{
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"You are not in this room.",
));
}
if let Typing::Yes(duration) = body.state {
services()
.rooms
.edus
.typing
.typing_add(
sender_user,
&body.room_id,
duration.as_millis() as u64 + utils::millis_since_unix_epoch(),
)
.await?;
} else {
services()
.rooms
.edus
.typing
.typing_remove(sender_user, &body.room_id)
.await?;
}
Ok(create_typing_event::v3::Response {})
}
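
The typing EDU is stored with an absolute expiry timestamp rather than a relative duration. A std-only sketch of the computation used above (the function name is illustrative):

use std::time::{Duration, SystemTime, UNIX_EPOCH};

// "typing until now + duration", expressed in unix milliseconds.
fn typing_expiry_ms(duration: Duration) -> u64 {
    let now_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("time is after the unix epoch")
        .as_millis() as u64;
    now_ms + duration.as_millis() as u64
}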


@ -0,0 +1,37 @@
use std::{collections::BTreeMap, iter::FromIterator};
use ruma::api::client::discovery::get_supported_versions;
use crate::{Result, Ruma};
/// # `GET /_matrix/client/versions`
///
/// Get the versions of the specification and unstable features supported by this server.
///
/// - Versions take the form MAJOR.MINOR.PATCH
/// - Only the latest PATCH release will be reported for each MAJOR.MINOR value
/// - Unstable features are namespaced and may include version information in their name
///
/// Note: Unstable features are used while developing new features. Clients should avoid using
/// unstable features in their stable releases
pub async fn get_supported_versions_route(
_body: Ruma<get_supported_versions::Request>,
) -> Result<get_supported_versions::Response> {
let resp = get_supported_versions::Response {
versions: vec![
"r0.5.0".to_owned(),
"r0.6.0".to_owned(),
"v1.1".to_owned(),
"v1.2".to_owned(),
"v1.3".to_owned(),
"v1.4".to_owned(),
"v1.5".to_owned(),
],
unstable_features: BTreeMap::from_iter([
("org.matrix.e2e_cross_signing".to_owned(), true),
("org.matrix.msc3916.stable".to_owned(), true),
]),
};
Ok(resp)
}


@ -0,0 +1,101 @@
use crate::{services, Result, Ruma};
use ruma::{
api::client::user_directory::search_users,
events::{
room::join_rules::{JoinRule, RoomJoinRulesEventContent},
StateEventType,
},
};
/// # `POST /_matrix/client/r0/user_directory/search`
///
/// Searches all known users for a match.
///
/// - Hides any local users that aren't in any public rooms (i.e. those that have the join rule set to public)
/// and don't share a room with the sender
pub async fn search_users_route(
body: Ruma<search_users::v3::Request>,
) -> Result<search_users::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let limit = u64::from(body.limit) as usize;
let mut users = services().users.iter().filter_map(|user_id| {
// Filter out buggy users (they should not exist, but you never know...)
let user_id = user_id.ok()?;
let user = search_users::v3::User {
user_id: user_id.clone(),
display_name: services().users.displayname(&user_id).ok()?,
avatar_url: services().users.avatar_url(&user_id).ok()?,
};
let user_id_matches = user
.user_id
.to_string()
.to_lowercase()
.contains(&body.search_term.to_lowercase());
let user_displayname_matches = user
.display_name
.as_ref()
.filter(|name| {
name.to_lowercase()
.contains(&body.search_term.to_lowercase())
})
.is_some();
if !user_id_matches && !user_displayname_matches {
return None;
}
// It's a matching user, but is the sender allowed to see them?
let mut user_visible = false;
let user_is_in_public_rooms = services()
.rooms
.state_cache
.rooms_joined(&user_id)
.filter_map(|r| r.ok())
.any(|room| {
services()
.rooms
.state_accessor
.room_state_get(&room, &StateEventType::RoomJoinRules, "")
.map_or(false, |event| {
event.map_or(false, |event| {
serde_json::from_str(event.content.get())
.map_or(false, |r: RoomJoinRulesEventContent| {
r.join_rule == JoinRule::Public
})
})
})
});
if user_is_in_public_rooms {
user_visible = true;
} else {
let user_is_in_shared_rooms = services()
.rooms
.user
.get_shared_rooms(vec![sender_user.clone(), user_id])
.ok()?
.next()
.is_some();
if user_is_in_shared_rooms {
user_visible = true;
}
}
if !user_visible {
return None;
}
Some(user)
});
let results = users.by_ref().take(limit).collect();
let limited = users.next().is_some();
Ok(search_users::v3::Response { results, limited })
}
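
The match test above is a case-insensitive substring check against both the full Matrix ID and the display name. Isolated as a sketch, with plain strings standing in for ruma's typed IDs:

fn user_matches(search_term: &str, user_id: &str, display_name: Option<&str>) -> bool {
    let term = search_term.to_lowercase();
    user_id.to_lowercase().contains(&term)
        || display_name.map_or(false, |name| name.to_lowercase().contains(&term))
}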


@ -0,0 +1,48 @@
use crate::{services, Result, Ruma};
use base64::{engine::general_purpose, Engine as _};
use hmac::{Hmac, Mac};
use ruma::{api::client::voip::get_turn_server_info, SecondsSinceUnixEpoch};
use sha1::Sha1;
use std::time::{Duration, SystemTime};
type HmacSha1 = Hmac<Sha1>;
/// # `GET /_matrix/client/r0/voip/turnServer`
///
/// TODO: Returns information about the recommended turn server.
pub async fn turn_server_route(
body: Ruma<get_turn_server_info::v3::Request>,
) -> Result<get_turn_server_info::v3::Response> {
let sender_user = body.sender_user.as_ref().expect("user is authenticated");
let turn_secret = services().globals.turn_secret().clone();
let (username, password) = if !turn_secret.is_empty() {
let expiry = SecondsSinceUnixEpoch::from_system_time(
SystemTime::now() + Duration::from_secs(services().globals.turn_ttl()),
)
.expect("time is valid");
let username: String = format!("{}:{}", expiry.get(), sender_user);
let mut mac = HmacSha1::new_from_slice(turn_secret.as_bytes())
.expect("HMAC can take key of any size");
mac.update(username.as_bytes());
let password: String = general_purpose::STANDARD.encode(mac.finalize().into_bytes());
(username, password)
} else {
(
services().globals.turn_username().clone(),
services().globals.turn_password().clone(),
)
};
Ok(get_turn_server_info::v3::Response {
username,
password,
uris: services().globals.turn_uris().to_vec(),
ttl: Duration::from_secs(services().globals.turn_ttl()),
})
}
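
When `turn_secret` is set, the credentials above follow coturn's `use-auth-secret` scheme: the username is `<unix expiry>:<mxid>` and the password is the base64-encoded HMAC-SHA1 of that username under the shared secret. A sketch of the check a TURN server configured with the same secret performs (expiry handling is omitted here):

use base64::{engine::general_purpose, Engine as _};
use hmac::{Hmac, Mac};
use sha1::Sha1;

type HmacSha1 = Hmac<Sha1>;

fn credential_is_valid(username: &str, password: &str, turn_secret: &[u8]) -> bool {
    let mut mac =
        HmacSha1::new_from_slice(turn_secret).expect("HMAC can take key of any size");
    mac.update(username.as_bytes());
    let expected = general_purpose::STANDARD.encode(mac.finalize().into_bytes());
    // A real TURN server would also parse the expiry prefix out of `username`
    // and reject credentials that are past it.
    expected == password
}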


@ -0,0 +1,22 @@
use ruma::api::client::discovery::discover_homeserver::{
self, HomeserverInfo, SlidingSyncProxyInfo,
};
use crate::{services, Result, Ruma};
/// # `GET /.well-known/matrix/client`
///
/// Returns the client server discovery information.
pub async fn well_known_client(
_body: Ruma<discover_homeserver::Request>,
) -> Result<discover_homeserver::Response> {
let client_url = services().globals.well_known_client();
Ok(discover_homeserver::Response {
homeserver: HomeserverInfo {
base_url: client_url.clone(),
},
identity_server: None,
sliding_sync_proxy: Some(SlidingSyncProxyInfo { url: client_url }),
})
}
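
For reference, a sketch of the JSON a client should receive from this endpoint when `well_known_client` is configured; the base URL is a placeholder, and the sliding-sync key is assumed to follow MSC3575's unstable name as serialized by ruma:

use serde_json::json;

// Illustrative response body only; "https://matrix.example.com" is a placeholder.
fn example_well_known_client_body() -> serde_json::Value {
    json!({
        "m.homeserver": { "base_url": "https://matrix.example.com" },
        "org.matrix.msc3575.proxy": { "url": "https://matrix.example.com" }
    })
}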

src/api/mod.rs Normal file

@@ -0,0 +1,4 @@
pub mod appservice_server;
pub mod client_server;
pub mod ruma_wrapper;
pub mod server_server;


@@ -0,0 +1,369 @@
use std::{collections::BTreeMap, iter::FromIterator, str};
use axum::{
async_trait,
body::Body,
extract::{FromRequest, Path},
response::{IntoResponse, Response},
RequestExt, RequestPartsExt,
};
use axum_extra::{
headers::{authorization::Bearer, Authorization},
typed_header::TypedHeaderRejectionReason,
TypedHeader,
};
use bytes::{BufMut, BytesMut};
use http::{Request, StatusCode};
use ruma::{
api::{client::error::ErrorKind, AuthScheme, IncomingRequest, OutgoingResponse},
server_util::authorization::XMatrix,
CanonicalJsonValue, MilliSecondsSinceUnixEpoch, OwnedDeviceId, OwnedUserId, UserId,
};
use serde::Deserialize;
use tracing::{debug, error, warn};
use super::{Ruma, RumaResponse};
use crate::{service::appservice::RegistrationInfo, services, Error, Result};
enum Token {
Appservice(Box<RegistrationInfo>),
User((OwnedUserId, OwnedDeviceId)),
Invalid,
None,
}
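// `Token` classifies what the presented access token (if any) resolved to:
// a registered appservice, a local user/device pair, an unrecognized token,
// or no token at all. The match on `(metadata.authentication, token)` below
// pairs this with the endpoint's `AuthScheme` to decide who the sender is.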
#[async_trait]
impl<T, S> FromRequest<S> for Ruma<T>
where
T: IncomingRequest,
{
type Rejection = Error;
async fn from_request(req: Request<Body>, _state: &S) -> Result<Self, Self::Rejection> {
#[derive(Deserialize)]
struct QueryParams {
access_token: Option<String>,
user_id: Option<String>,
}
let (mut parts, mut body) = {
let limited_req = req.with_limited_body();
let (parts, body) = limited_req.into_parts();
let body = axum::body::to_bytes(
body,
services()
.globals
.max_request_size()
.try_into()
.unwrap_or(usize::MAX),
)
.await
.map_err(|_| Error::BadRequest(ErrorKind::MissingToken, "Missing token."))?;
(parts, body)
};
let metadata = T::METADATA;
let auth_header: Option<TypedHeader<Authorization<Bearer>>> = parts.extract().await?;
let path_params: Path<Vec<String>> = parts.extract().await?;
let query = parts.uri.query().unwrap_or_default();
let query_params: QueryParams = match serde_html_form::from_str(query) {
Ok(params) => params,
Err(e) => {
error!(%query, "Failed to deserialize query parameters: {}", e);
return Err(Error::BadRequest(
ErrorKind::Unknown,
"Failed to read query parameters",
));
}
};
let token = match &auth_header {
Some(TypedHeader(Authorization(bearer))) => Some(bearer.token()),
None => query_params.access_token.as_deref(),
};
let token = if let Some(token) = token {
if let Some(reg_info) = services().appservice.find_from_token(token).await {
Token::Appservice(Box::new(reg_info.clone()))
} else if let Some((user_id, device_id)) = services().users.find_from_token(token)? {
Token::User((user_id, OwnedDeviceId::from(device_id)))
} else {
Token::Invalid
}
} else {
Token::None
};
let mut json_body = serde_json::from_slice::<CanonicalJsonValue>(&body).ok();
let (sender_user, sender_device, sender_servername, appservice_info) =
match (metadata.authentication, token) {
(_, Token::Invalid) => {
// OpenID endpoint uses a query param with the same name, drop this once query params for user auth are removed from the spec
if query_params.access_token.is_some() {
(None, None, None, None)
} else {
return Err(Error::BadRequest(
ErrorKind::UnknownToken { soft_logout: false },
"Unknown access token.",
));
}
}
(AuthScheme::AccessToken, Token::Appservice(info)) => {
let user_id = query_params
.user_id
.map_or_else(
|| {
UserId::parse_with_server_name(
info.registration.sender_localpart.as_str(),
services().globals.server_name(),
)
},
UserId::parse,
)
.map_err(|_| {
Error::BadRequest(ErrorKind::InvalidUsername, "Username is invalid.")
})?;
if !info.is_user_match(&user_id) {
return Err(Error::BadRequest(
ErrorKind::Exclusive,
"User is not in namespace.",
));
}
if !services().users.exists(&user_id)? {
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"User does not exist.",
));
}
(Some(user_id), None, None, Some(*info))
}
(
AuthScheme::None
| AuthScheme::AppserviceToken
| AuthScheme::AccessTokenOptional,
Token::Appservice(info),
) => (None, None, None, Some(*info)),
(AuthScheme::AccessToken, Token::None) => {
return Err(Error::BadRequest(
ErrorKind::MissingToken,
"Missing access token.",
));
}
(
AuthScheme::AccessToken | AuthScheme::AccessTokenOptional | AuthScheme::None,
Token::User((user_id, device_id)),
) => (Some(user_id), Some(device_id), None, None),
(AuthScheme::ServerSignatures, Token::None) => {
let TypedHeader(Authorization(x_matrix)) = parts
.extract::<TypedHeader<Authorization<XMatrix>>>()
.await
.map_err(|e| {
warn!("Missing or invalid Authorization header: {}", e);
let msg = match e.reason() {
TypedHeaderRejectionReason::Missing => {
"Missing Authorization header."
}
TypedHeaderRejectionReason::Error(_) => {
"Invalid X-Matrix signatures."
}
_ => "Unknown header-related error",
};
Error::BadRequest(ErrorKind::forbidden(), msg)
})?;
if let Some(dest) = x_matrix.destination {
if dest != services().globals.server_name() {
return Err(Error::BadRequest(
ErrorKind::Unauthorized,
"X-Matrix destination field does not match server name.",
));
}
};
let origin_signatures = BTreeMap::from_iter([(
x_matrix.key.clone(),
CanonicalJsonValue::String(x_matrix.sig.to_string()),
)]);
let signatures = BTreeMap::from_iter([(
x_matrix.origin.as_str().to_owned(),
CanonicalJsonValue::Object(
origin_signatures
.into_iter()
.map(|(k, v)| (k.to_string(), v))
.collect(),
),
)]);
let mut request_map = BTreeMap::from_iter([
(
"method".to_owned(),
CanonicalJsonValue::String(parts.method.to_string()),
),
(
"uri".to_owned(),
CanonicalJsonValue::String(parts.uri.to_string()),
),
(
"origin".to_owned(),
CanonicalJsonValue::String(x_matrix.origin.as_str().to_owned()),
),
(
"destination".to_owned(),
CanonicalJsonValue::String(
services().globals.server_name().as_str().to_owned(),
),
),
(
"signatures".to_owned(),
CanonicalJsonValue::Object(signatures),
),
]);
if let Some(json_body) = &json_body {
request_map.insert("content".to_owned(), json_body.clone());
};
let keys_result = services()
.rooms
.event_handler
.fetch_signing_keys(&x_matrix.origin, vec![x_matrix.key.to_string()], false)
.await;
let keys = match keys_result {
Ok(b) => b,
Err(e) => {
warn!("Failed to fetch signing keys: {}", e);
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Failed to fetch signing keys.",
));
}
};
// Only verify_keys that are currently valid should be used for validating requests
// as per MSC4029
let pub_key_map = BTreeMap::from_iter([(
x_matrix.origin.as_str().to_owned(),
if keys.valid_until_ts > MilliSecondsSinceUnixEpoch::now() {
keys.verify_keys
.into_iter()
.map(|(id, key)| (id, key.key))
.collect()
} else {
BTreeMap::new()
},
)]);
match ruma::signatures::verify_json(&pub_key_map, &request_map) {
Ok(()) => (None, None, Some(x_matrix.origin), None),
Err(e) => {
warn!(
"Failed to verify json request from {}: {}\n{:?}",
x_matrix.origin, e, request_map
);
if parts.uri.to_string().contains('@') {
warn!(
"Request uri contained '@' character. Make sure your \
reverse proxy gives Conduit the raw uri (apache: use \
nocanon)"
);
}
return Err(Error::BadRequest(
ErrorKind::forbidden(),
"Failed to verify X-Matrix signatures.",
));
}
}
}
(
AuthScheme::None
| AuthScheme::AppserviceToken
| AuthScheme::AccessTokenOptional,
Token::None,
) => (None, None, None, None),
(AuthScheme::ServerSignatures, Token::Appservice(_) | Token::User(_)) => {
return Err(Error::BadRequest(
ErrorKind::Unauthorized,
"Only server signatures should be used on this endpoint.",
));
}
(AuthScheme::AppserviceToken, Token::User(_)) => {
return Err(Error::BadRequest(
ErrorKind::Unauthorized,
"Only appservice access tokens should be used on this endpoint.",
));
}
};
let mut http_request = Request::builder().uri(parts.uri).method(parts.method);
*http_request.headers_mut().unwrap() = parts.headers;
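        // If this is a UIAA continuation (the body carries `auth.session`), merge
        // the fields of the originally stored request back into the body so the
        // handler sees the complete initial request, then re-serialize the body.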
if let Some(CanonicalJsonValue::Object(json_body)) = &mut json_body {
let user_id = sender_user.clone().unwrap_or_else(|| {
UserId::parse_with_server_name("", services().globals.server_name())
.expect("we know this is valid")
});
let uiaa_request = json_body
.get("auth")
.and_then(|auth| auth.as_object())
.and_then(|auth| auth.get("session"))
.and_then(|session| session.as_str())
.and_then(|session| {
services().uiaa.get_uiaa_request(
&user_id,
&sender_device.clone().unwrap_or_else(|| "".into()),
session,
)
});
if let Some(CanonicalJsonValue::Object(initial_request)) = uiaa_request {
for (key, value) in initial_request {
json_body.entry(key).or_insert(value);
}
}
let mut buf = BytesMut::new().writer();
serde_json::to_writer(&mut buf, json_body).expect("value serialization can't fail");
body = buf.into_inner().freeze();
}
let http_request = http_request.body(&*body).unwrap();
debug!("{:?}", http_request);
let body = T::try_from_http_request(http_request, &path_params).map_err(|e| {
warn!("try_from_http_request failed: {:?}", e);
debug!("JSON body: {:?}", json_body);
Error::BadRequest(ErrorKind::BadJson, "Failed to deserialize request.")
})?;
Ok(Ruma {
body,
sender_user,
sender_device,
sender_servername,
appservice_info,
json_body,
})
}
}
impl<T: OutgoingResponse> IntoResponse for RumaResponse<T> {
fn into_response(self) -> Response {
match self.0.try_into_http_response::<BytesMut>() {
Ok(res) => res.map(BytesMut::freeze).map(Body::from).into_response(),
Err(_) => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
}
}
}
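
The `ServerSignatures` branch rebuilds the canonical JSON object that the origin server signed and hands it to `ruma::signatures::verify_json` together with the origin's currently valid verify keys. A minimal sketch of that object's shape (server names, key ID, and signature below are placeholders; `content` is added only when the request carries a JSON body):

use serde_json::json;

// Shape of the request map assembled in the X-Matrix branch above.
fn example_signed_request_object() -> serde_json::Value {
    json!({
        "method": "GET",
        "uri": "/_matrix/federation/v1/version",
        "origin": "other.example.org",
        "destination": "conduit.example.org",
        "signatures": {
            "other.example.org": {
                "ed25519:abc123": "<unpadded base64 signature>"
            }
        }
    })
}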

Some files were not shown because too many files have changed in this diff.