Synapse: Matrix reference homeserver

Overview

  • Support: #synapse:matrix.org
  • Development discussion: #synapse-dev:matrix.org
  • License: see the LICENSE file
  • Latest release: on PyPI
  • Supported Python versions: listed on PyPI

Introduction

Matrix is an ambitious new ecosystem for open federated Instant Messaging and VoIP. The basics you need to know to get up and running are:

  • Everything in Matrix happens in a room. Rooms are distributed and do not exist on any single server. Rooms can be located using convenience aliases like #matrix:matrix.org or #test:localhost:8448.
  • Matrix user IDs look like @matthew:matrix.org (although in the future you will normally refer to yourself and others using a third party identifier (3PID): email address, phone number, etc rather than manipulating Matrix user IDs)

The overall architecture is:

client <----> homeserver <=====================> homeserver <----> client
       https://somewhere.org/_matrix      https://elsewhere.net/_matrix

#matrix:matrix.org is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html or via IRC bridge at irc://irc.freenode.net/matrix.

Synapse is currently in rapid development, but as of version 0.5 we believe it is sufficiently stable to be run as an internet-facing service for real usage!

About Matrix

Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard, which handle:

  • Creating and managing fully distributed chat rooms with no single points of control or failure
  • Eventually-consistent cryptographically secure synchronisation of room state across a global open network of federated servers and services
  • Sending and receiving extensible messages in a room with (optional) end-to-end encryption
  • Inviting, joining, leaving, kicking, banning room members
  • Managing user accounts (registration, login, logout)
  • Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers, Facebook accounts to authenticate, identify and discover users on Matrix.
  • Placing 1:1 VoIP and Video calls

These APIs are intended to be implemented on a wide range of servers, services and clients, letting developers build messaging and VoIP functionality on top of the entirely open Matrix ecosystem rather than using closed or proprietary solutions. The hope is for Matrix to act as the building blocks for a new generation of fully open and interoperable messaging and VoIP apps for the internet.

Synapse is a reference "homeserver" implementation of Matrix from the core development team at matrix.org, written in Python/Twisted. It is intended to showcase the concept of Matrix, let folks see the spec in the context of a codebase, let you run your own homeserver, and generally help bootstrap the ecosystem.

In Matrix, every user runs one or more Matrix clients, which connect through to a Matrix homeserver. The homeserver stores all their personal chat history and user account information - much as a mail client connects through to an IMAP/SMTP server. Just like email, you can either run your own Matrix homeserver and control and own your own communications and history or use one hosted by someone else (e.g. matrix.org) - there is no single point of control or mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.

We'd like to invite you to join #matrix:matrix.org (via https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look at the Matrix spec, and experiment with the APIs and Client SDKs.

Thanks for using Matrix!

Support

For support installing or managing Synapse, please join #synapse:matrix.org (from a matrix.org account if necessary) and ask questions there. We do not use GitHub issues for support requests, only for bug reports and feature requests.

Synapse Installation

  • For details on how to install synapse, see INSTALL.md.
  • For specific details on how to configure Synapse for federation see docs/federate.md

Connecting to Synapse from a client

The easiest way to try out your new Synapse installation is by connecting to it from a web client.

Unless you are running a test instance of Synapse on your local machine, in general, you will need to enable TLS support before you can successfully connect from a client: see INSTALL.md#tls-certificates.

An easy way to get started is to log in or register via Element at https://app.element.io/#/login or https://app.element.io/#/register respectively. You will need to change the server you are logging into from matrix.org and instead specify a Homeserver URL of https://<server_name>:8448 (or just https://<server_name> if you are using a reverse proxy). If you prefer to use another client, refer to our client breakdown.
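The two Homeserver URL forms above can be captured in a tiny helper; this is an illustrative sketch only (the function name and flag are made up here, not part of Synapse):

```python
def homeserver_url(server_name: str, behind_reverse_proxy: bool = False) -> str:
    # Behind a reverse proxy, TLS terminates on the default HTTPS port (443),
    # so no explicit port is needed; otherwise use Synapse's default TLS port, 8448.
    if behind_reverse_proxy:
        return f"https://{server_name}"
    return f"https://{server_name}:8448"
```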

If all goes well you should at least be able to log in, create a room, and start sending messages.

Registering a new user from a client

By default, registration of new users via Matrix clients is disabled. To enable it, specify enable_registration: true in homeserver.yaml. (It is then recommended to also set up CAPTCHA - see docs/CAPTCHA_SETUP.md.)

Once enable_registration is set to true, it is possible to register a user via a Matrix client.
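In homeserver.yaml this looks like the following excerpt (the CAPTCHA keys are shown commented out; see docs/CAPTCHA_SETUP.md for obtaining them):

```yaml
# homeserver.yaml (excerpt)
enable_registration: true

# Recommended alongside open registration:
# enable_registration_captcha: true
# recaptcha_public_key: "YOUR_PUBLIC_KEY"
# recaptcha_private_key: "YOUR_PRIVATE_KEY"
```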

Your new user name will be formed partly from the server_name, and partly from a localpart you specify when you create the account. Your name will take the form of:

@localpart:my.domain.name

(pronounced "at localpart on my dot domain dot name").
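A user ID of this shape splits mechanically into its two parts; a quick illustrative sketch (the helper name is made up here, not a Synapse API):

```python
def split_user_id(user_id: str) -> tuple[str, str]:
    """Split '@localpart:my.domain.name' into (localpart, server_name)."""
    if not user_id.startswith("@") or ":" not in user_id:
        raise ValueError(f"not a Matrix user ID: {user_id!r}")
    # Split on the first colon only: server names may themselves contain
    # a colon when they carry an explicit port (e.g. 'localhost:8448').
    localpart, server_name = user_id[1:].split(":", 1)
    return localpart, server_name
```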

As when logging in, you will need to specify a "Custom server". Specify your desired localpart in the 'User name' box.

ACME setup

For details on having Synapse manage your federation TLS certificates automatically, please see docs/ACME.md.

Security Note

Matrix serves raw user-generated data in some APIs - specifically the content repository endpoints.

Whilst we have tried to mitigate against possible XSS attacks (e.g. https://github.com/matrix-org/synapse/pull/1021), we recommend running Matrix homeservers on a dedicated domain name, to prevent malicious user-generated content served via a Matrix API from attacking webapps hosted on the same domain. This is particularly important when a Matrix web client and server share a domain.

See https://github.com/vector-im/riot-web/issues/1977 and https://developer.github.com/changes/2014-04-25-user-content-security for more details.

Upgrading an existing Synapse

The instructions for upgrading synapse are in UPGRADE.rst. Please check these instructions as upgrading may require extra steps for some versions of synapse.

Using a reverse proxy with Synapse

It is recommended to put a reverse proxy such as nginx, Apache, Caddy or HAProxy in front of Synapse. One advantage of doing so is that it means that you can expose the default https port (443) to Matrix clients without needing to run Synapse with root privileges.

For information on configuring one, see docs/reverse_proxy.md.
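As a sketch of the idea only (not a complete, supported config; docs/reverse_proxy.md is authoritative, and the hostname here is a placeholder), an nginx reverse proxy forwards Matrix traffic from port 443 to Synapse's default plain-HTTP port:

```nginx
server {
    listen 443 ssl;
    server_name matrix.example.com;  # placeholder

    location /_matrix {
        proxy_pass http://localhost:8008;  # Synapse's default plain-HTTP listener
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```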

Identity Servers

Identity servers have the job of mapping email addresses and other 3rd Party IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs before creating that mapping.

They are not where accounts or credentials are stored - these live on homeservers. Identity servers are just for mapping 3rd party IDs to Matrix IDs.

This process is very security-sensitive, as there is obvious risk of spam if it is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer term, we hope to create a decentralised system to manage it (matrix-doc #712), but in the meantime, the role of managing trusted identity in the Matrix ecosystem is farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix Identity Servers' such as Sydent, whose role is purely to authenticate and track 3PID logins and publish end-user public keys.

You can host your own copy of Sydent, but this will prevent you from reaching other users in the Matrix ecosystem via their email address, and prevent them from finding you. We therefore recommend that you use one of the centralised identity servers at https://matrix.org or https://vector.im for now.

To reiterate: the Identity server will only be used if you choose to associate an email address with your account, or send an invite to another user via their email address.

Password reset

Users can reset their password through their client. Alternatively, a server admin can reset a user's password using the admin API or by directly editing the database as shown below.

First calculate the hash of the new password:

$ ~/synapse/env/bin/hash_password
Password:
Confirm password:
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then update the users table in the database:

UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    WHERE name='@test:test.com';
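If you script that update, prefer parameterized queries over pasting the hash into SQL; a minimal sketch against a throwaway SQLite database (Synapse normally runs against PostgreSQL or SQLite; the table here is simplified to the two columns the UPDATE touches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY, password_hash TEXT)")
conn.execute(
    "INSERT INTO users (name, password_hash) VALUES (?, ?)",
    ("@test:test.com", "old-hash"),
)

# The hash produced by hash_password above:
new_hash = "$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
conn.execute(
    "UPDATE users SET password_hash = ? WHERE name = ?",
    (new_hash, "@test:test.com"),
)
row = conn.execute(
    "SELECT password_hash FROM users WHERE name = ?", ("@test:test.com",)
).fetchone()
```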

Synapse Development

Join our developer community on Matrix: #synapse-dev:matrix.org

Before setting up a development environment for synapse, make sure you have the system dependencies (such as the python header files) installed - see Installing from source.

To check out Synapse for development, clone the git repo into a working directory of your choice:

git clone https://github.com/matrix-org/synapse.git
cd synapse

Synapse has a number of external dependencies that are easiest to install using pip and a virtualenv:

python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,test]"

This will download and install all the needed dependencies into a virtual env. If any dependencies fail to install, try installing the failing modules individually:

pip install "module-name"

Once this is done, you may wish to run Synapse's unit tests to check that everything is installed correctly:

python -m twisted.trial tests

This should end with a 'PASSED' result (note that exact numbers will differ):

Ran 1337 tests in 716.064s

PASSED (skips=15, successes=1322)

We recommend using the demo, which starts 3 federated instances running on ports 8080 - 8082:

./demo/start.sh

(to stop, you can use ./demo/stop.sh)

If you just want to start a single instance of the app and run it directly:

# Create the homeserver.yaml config once
python -m synapse.app.homeserver \
  --server-name my.domain.name \
  --config-path homeserver.yaml \
  --generate-config \
  --report-stats=[yes|no]

# Start the app
python -m synapse.app.homeserver --config-path homeserver.yaml

Running the Integration Tests

Synapse is accompanied by SyTest, a Matrix homeserver integration testing suite, which uses HTTP requests to access the API as a Matrix client would. It is able to run Synapse directly from the source tree, so installation of the server is not required.

Testing with SyTest is recommended for verifying that changes related to the Client-Server API are functioning correctly. See the installation instructions for details.

Troubleshooting

Need help? Join our community support room on Matrix: #synapse:matrix.org

Running out of File Handles

If synapse runs out of file handles, it typically fails badly - live-locking at 100% CPU, and/or failing to accept new TCP connections (blocking the connecting client). Matrix currently can legitimately use a lot of file handles, thanks to busy rooms like #matrix:matrix.org containing hundreds of participating servers. The first time a server talks in a room it will try to connect simultaneously to all participating servers, which could exhaust the available file descriptors between DNS queries & HTTPS sockets, especially if DNS is slow to respond. (We need to improve the routing algorithm used to be better than full mesh, but as of March 2019 this hasn't happened yet).

If you hit this failure mode, we recommend increasing the maximum number of open file handles to be at least 4096 (assuming a default of 1024 or 256). This is typically done by editing /etc/security/limits.conf.
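You can check the soft and hard limits a running Python process actually sees via the standard-library resource module (Unix-only):

```python
import resource

# RLIMIT_NOFILE is the maximum number of open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```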

Separately, Synapse may leak file handles if inbound HTTP requests get stuck during processing - e.g. blocked behind a lock or talking to a remote server. This is best diagnosed by matching up the 'Received request' and 'Processed request' log lines and looking for any 'Processed request' lines which take more than a few seconds to execute. If you see this failure mode, please let us know at #synapse:matrix.org so we can help debug it.
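A rough way to pair those lines up by request identifier; a sketch assuming the access-log format shown in the examples on this page:

```python
import re

# Two sample lines in the shape of Synapse's access logger output (abridged).
log_lines = [
    "2019-09-11 19:32:04,271 - synapse.access.https.8448 - 59 - INFO - "
    "PUT-53227- 91.134.136.82 - 8448 - Received request: PUT /_matrix/federation/v1/send/123/",
    "2019-09-11 19:32:05,623 - synapse.access.https.8448 - 91 - INFO - "
    "PUT-53227- 91.134.136.82 - 8448 - Processed request: 1352ms ...",
]

received, processed = {}, {}
for line in log_lines:
    m = re.search(r"\b([A-Z]+-\d+)-.*\b(Received|Processed) request", line)
    if m:
        target = received if m.group(2) == "Received" else processed
        target[m.group(1)] = line

# Requests received but never processed are suspects for leaked handles.
stuck = set(received) - set(processed)
```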

Help!! Synapse is slow and eats all my RAM/CPU!

First, ensure you are running the latest version of Synapse, using Python 3 with a PostgreSQL database.

Synapse's architecture is quite RAM hungry currently - we deliberately cache a lot of recent room data and metadata in RAM in order to speed up common requests. We'll improve this in the future, but for now the easiest way to reduce the RAM usage (at the risk of slowing things down) is to set the almost-undocumented SYNAPSE_CACHE_FACTOR environment variable. The default is 0.5, which can be decreased to reduce RAM usage in memory-constrained environments, or increased if performance starts to degrade.

However, degraded performance due to a low cache factor, common on machines with slow disks, often leads to explosions in memory use due to backlogged requests. In this case, reducing the cache factor will make things worse. Instead, try increasing it drastically; 2.0 is a good starting value.

Using libjemalloc can also yield a significant improvement in overall memory use, and especially in terms of giving back RAM to the OS. To use it, the library must simply be put in the LD_PRELOAD environment variable when launching Synapse. On Debian, this can be done by installing the libjemalloc1 package and adding this line to /etc/default/matrix-synapse:

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1

This can make a significant difference on Python 2.7 - it's unclear how much of an improvement it provides on Python 3.x.

If you're encountering high CPU use by the Synapse process itself, you may be affected by a bug with presence tracking that leads to a massive excess of outgoing federation requests (see discussion). If metrics indicate that your server is also issuing far more outgoing federation requests than can be accounted for by your users' activity, this is a likely cause. The misbehavior can be worked around by setting use_presence: false in the Synapse config file.
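The workaround, as a homeserver.yaml excerpt:

```yaml
# homeserver.yaml (excerpt): disable presence tracking entirely
use_presence: false
```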

People can't accept room invitations from me

The typical failure mode here is that you send an invitation to someone to join a room or direct chat, but when they go to accept it, they get an error (typically along the lines of "Invalid signature"). They might see something like the following in their logs:

2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>

This is normally caused by a misconfiguration in your reverse-proxy. See docs/reverse_proxy.md and double-check that your settings are correct.
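A quick scan of homeserver.log for this failure mode might look like the following sketch (the pattern is inferred from the sample line above; the server name here is a placeholder):

```python
import re

# Sample line in the shape shown above (abridged).
line = (
    "2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - "
    "WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature "
    "for server example.org with key ed25519:a_EqML"
)

pattern = re.compile(
    r"authenticate_request failed: 401: Invalid signature for server (\S+)"
)
m = pattern.search(line)
failing_server = m.group(1) if m else None
```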

Comments
  • memory leak since 1.53.0

    memory leak since 1.53.0

    Description

    Since upgrading to matrix-synapse-py3==1.53.0+focal1 from 1.49.2+bionic1, I have observed a memory leak on my instance. The upgrade coincided with an OS upgrade from Ubuntu bionic to focal (Python 3.6 to 3.8).

    We didn't change homeserver.yaml during upgrade

    Our machine ran with 3 GB of memory for 2 years, and now 10 GB isn't enough.

    Steps to reproduce

    root@srv-matrix1:~# systemctl status matrix-synapse.service
    ● matrix-synapse.service - Synapse Matrix homeserver
         Loaded: loaded (/lib/systemd/system/matrix-synapse.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-03-04 10:21:05 CET; 4h 45min ago
        Process: 171067 ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homese>
       Main PID: 171075 (python)
          Tasks: 30 (limit: 11811)
         Memory: 6.1G
         CGroup: /system.slice/matrix-synapse.service
                 └─171075 /opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml ->


    I tried changing this config, without success: expiry_time: 30m

    syslog says the OOM killer killed synapse:

    Mar 4 10:20:54 XXXX kernel: [174841.111273] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/matrix-synapse.service,task=python,pid=143210,uid=114
    Mar 4 10:20:54 srv-matrix1 kernel: [174841.111339] Out of memory: Killed process 143210 (python) total-vm:12564520kB, anon-rss:9073668kB, file-rss:0kB, shmem-rss:0kB, UID:114 pgtables:21244kB oom_score_adj:0

    No further useful information in homeserver.log.

    Version information

    $ curl http://localhost:8008/_synapse/admin/v1/server_version
    {"server_version":"1.53.0","python_version":"3.8.10"}

    • Version: 1.53.0

    • Install method: Ubuntu apt repo

    • Platform: VMWare

    I'd be happy to help get a Python stack trace to debug this, if given any lead on how to do so.

    (sorry for my english)

    A-Presence S-Minor T-Defect O-Occasional A-Memory-Usage 
    opened by lchanouha 59
  • @kyrias's HS on develop is leaking inbound FDs and filling up with CRITICAL endpoint errors

    @kyrias's HS on develop is leaking inbound FDs and filling up with CRITICAL endpoint errors

    sierra:borklogs matthew$ cat homeserver.log | grep CRITICAL -A20 | grep '^[a-zA-Z]' | grep -v Traceback | sort | uniq -c
      31 AlreadyCalledError
     601 IndexError: pop from empty list
    

    Superficially it looks like the number of 'pop from empty list' errors matches the number of leaked inbound connections. Actual failed requests look like:

    2017-01-07 13:55:10,882 - synapse.access.https.8448 - 59 - INFO - PUT-53227- 91.134.136.82 - 8448 - Received request: PUT /_matrix/federation/v1/send/1483649379640/
    2017-01-07 13:55:10,985 - synapse.metrics - 162 - INFO - PUT-53227- Collecting gc 0
    2017-01-07 13:55:11,058 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f388017d8c0>
    2017-01-07 13:55:11,061 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f3882e401b8>
    2017-01-07 13:55:11,063 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f387faaba70>
    2017-01-07 13:55:11,065 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f388fd86290>
    2017-01-07 13:55:11,068 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f3889b140e0>
    2017-01-07 13:55:11,375 - twisted - 131 - INFO - PUT-53227- Stopping factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f3882e401b8>
    2017-01-07 13:55:11,412 - twisted - 131 - INFO - PUT-53227- Stopping factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f387faaba70>
    2017-01-07 13:55:11,456 - twisted - 131 - CRITICAL - PUT-53227- Unhandled error in Deferred:
    2017-01-07 13:55:11,457 - twisted - 131 - CRITICAL - PUT-53227- 
    2017-01-07 13:55:11,458 - twisted - 131 - CRITICAL - PUT-53227- Unhandled error in Deferred:
    2017-01-07 13:55:11,458 - twisted - 131 - CRITICAL - PUT-53227- 
    2017-01-07 13:55:11,467 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f3882e401b8>
    2017-01-07 13:55:11,474 - twisted - 131 - INFO - PUT-53227- Starting factory <twisted.web.client._HTTP11ClientFactory instance at 0x7f387faaba70>
    2017-01-07 13:55:11,625 - twisted - 131 - CRITICAL - PUT-53227- Unhandled error in Deferred:
    2017-01-07 13:55:11,625 - twisted - 131 - CRITICAL - PUT-53227- 
    2017-01-07 13:55:11,789 - synapse.federation.transport.server - 138 - INFO - PUT-53227- Request from kolm.io
    2017-01-07 13:55:11,873 - synapse.federation.transport.server - 244 - INFO - PUT-53227- Received txn 1483649379640 from kolm.io. (PDUs: 0, EDUs: 1, failures: 0)
    2017-01-07 13:55:12,235 - synapse.access.https.8448 - 91 - INFO - PUT-53227- 91.134.136.82 - 8448 - {kolm.io} Processed request: 1352ms (336ms, 10ms) (21ms/4) 11B 200 "PUT /_matrix/federation/v1/send/1483649379640/ HTTP/1.1" "Synapse/0.18.6-rc3"
    

    Presumably the 'Starting factory' lines are bogus log contexts though...

    opened by ara4n 56
  • Forward extremities accumulate and lead to poor performance

    Forward extremities accumulate and lead to poor performance

    TLDR: To determine if you are affected by this problem, run the following query:

    select room_id, count(*) c from event_forward_extremities group by room_id order by c desc limit 20;
    

    Any rows showing a count of more than a handful (say 10) are cause for concern. You can probably gain some respite by running the query at https://github.com/matrix-org/synapse/issues/1760#issuecomment-379183539.


    Whilst investigating the cause of heap usage spikes in synapse, correlating jumps in RSZ with logs showed that 'resolving state for !curbaf with 49 groups' loglines took ages to execute and would temporarily take loads of heap (resulting in a permanent hike in RSZ, as python is bad at reclaiming heap).

    On looking at the groups being resolved, it turns out that these were the extremities of the current room, and whenever synapse queries the current room state, it has to merge them all together, an operation whose implementation is currently very slow. To clear the extremities, one has to talk in the room (each message 'heals' 10 extremities, as the max prev-events for a message is 10).

    Problems here are:

    • [ ] Why are we accumulating so many extremities? I assume it's whenever there's some downtime the graph breaks, leaving a dangling node.
    • [ ] Is there a way to stop them accumulating by healing or discarding them on launch (e.g. by sending a null healing event into the room)?
    • [ ] Why is state resolution so incredibly heavy? There should hardly be any conflicting state here, unless the bifurcation has been going on for months. Is it because, to auth potential conflicts, we have to load all auth events, which include every m.room.member event?
    • [ ] Logs of a state resolution happening from arasphere at DEBUG show lots of thrashing on the rejections table too.
    • [ ] We're also seeing ominous pauses in the logging of requests which resolve state, as if there's some lock we're contending for. (This might be the same as #1774)
    • [ ] Can we just insert dummy nodes in our local copy of the DAG after doing a successful state resolution, to avoid having to constantly re-calculate it or rely on naive caching?
    A-Federation 
    opened by ara4n 54
  • Can't join nor leave a space after running buggy version of portdb

    Can't join nor leave a space after running buggy version of portdb

    Description

    I was in a space, and for some reason I wanted to leave and come back to check something. I couldn't leave, so I asked the admin to expel me. Then I tried to join that space again. Another user invited me. Now I get the invitation message offering me to accept or reject. Neither of the two actions works: Accept takes forever without any answer, and Reject gives me "internal server error". And now, no other user of my server can join this space.

    Steps to reproduce

    • Have been invited
    • can't join
    • can't reject

    Homeserver

    defis.info

    Synapse Version

    1.63

    Installation Method

    Debian packages from packages.matrix.org

    Platform

    proxmox container with debian behind a nginx reverse proxy

    Relevant log output

    2022-07-31 18:26:09,066 - synapse.http.server - 187 - ERROR - POST-4175 - Failed handle request via 'RoomMembershipRestServlet': <XForwardedForRequest at 0x7f39e5085400 method='POST' uri='/_matrix/client/r0/rooms/!nsxQkuXAmwktOKyUNc%3Amatrix.org/leave' clientproto='HTTP/1.0' site='8008'>
    Traceback (most recent call last):
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/twisted/internet/defer.py", line 1660, in _inlineCallbacks
        result = current_context.run(gen.send, result)
    StopIteration: {}
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/http/server.py", line 366, in _async_render_wrapper
        callback_return = await self._async_render(request)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/http/server.py", line 572, in _async_render
        callback_return = await raw_callback_return
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/rest/client/room.py", line 897, in on_POST
        content=event_content,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 542, in update_membership
        state_event_ids=state_event_ids,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 972, in update_membership_locked
        outlier=outlier,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 400, in _local_membership_update
        ratelimit=ratelimit,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/util/metrics.py", line 113, in measured_func
        r = await func(self, *args, **kwargs)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/message.py", line 1291, in handle_new_client_event
        event, context
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/event_auth.py", line 58, in check_auth_rules_from_context
        await check_state_independent_auth_rules(self._store, event)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/event_auth.py", line 169, in check_state_independent_auth_rules
        f"Event {event.event_id} has unknown auth event {auth_event_id}"
    RuntimeError: Event $KlfvzNGRRo-PB9i-avTFd4FAqYHu9sIHupgo8R5yoYU has unknown auth event $bXSYbo7eqRelhC8mEQxb9gY6iTjhjHGJAu6A7p086ec
    2022-07-31 18:26:09,067 - synapse.access.http.8008 - 471 - INFO - POST-4175 - 2.10.223.85 - 8008 - {@thatoo:defis.info} Processed request: 0.132sec/0.000sec (0.001sec, 0.000sec) (0.001sec/0.125sec/3) 55B 500 "POST /_matrix/client/r0/rooms/!nsxQkuXAmwktOKyUNc%3Amatrix.org/leave HTTP/1.0" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) SchildiChat/1.10.12-sc.1 Chrome/98.0.4758.141 Electron/17.4.0 Safari/537.36" [2 dbevts]
    
    
    2022-07-31 18:16:57,375 - synapse.http.matrixfederationclient - 307 - INFO - POST-1967 - {GET-O-20866} [matrix.org] Completed request: 200 OK in 0.65 secs, got 179 bytes - GET matrix://matrix.org/_matrix/federation/v1/query/directory?room_alias=%23monnaie-libre%3Amatrix.org
    2022-07-31 18:16:57,377 - synapse.http.matrixfederationclient - 307 - INFO - _process_i
    2022-07-31 18:16:57,463 - synapse.http.server - 187 - ERROR - POST-1967 - Failed handle request via 'JoinRoomAliasServlet': <XForwardedForRequest at 0x7f3a05c13b70 method='POST' uri='/_matrix/client/r0/join/%23monnaie-libre%3Amatrix.org' clientproto='HTTP/1.0' site='8008'>
    Traceback (most recent call last):
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/twisted/internet/defer.py", line 1660, in _inlineCallbacks
        result = current_context.run(gen.send, result)
    StopIteration: {}
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/http/server.py", line 366, in _async_render_wrapper
        callback_return = await self._async_render(request)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/http/server.py", line 572, in _async_render
        callback_return = await raw_callback_return
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/rest/client/room.py", line 343, in on_POST
        third_party_signed=content.get("third_party_signed", None),
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 542, in update_membership
        state_event_ids=state_event_ids,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 972, in update_membership_locked
        outlier=outlier,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/room_member.py", line 400, in _local_membership_update
        ratelimit=ratelimit,
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/util/metrics.py", line 113, in measured_func
        r = await func(self, *args, **kwargs)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/message.py", line 1291, in handle_new_client_event
        event, context
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/handlers/event_auth.py", line 58, in check_auth_rules_from_context
        await check_state_independent_auth_rules(self._store, event)
      File "/opt/venvs/matrix-synapse/lib/python3.7/site-packages/synapse/event_auth.py", line 169, in check_state_independent_auth_rules
        f"Event {event.event_id} has unknown auth event {auth_event_id}"
    RuntimeError: Event $YnLQaYcjfxHWiIPFpmisATyU1IcMC2hKXCNWUeK_Hos has unknown auth event $d07wypTuQbxA5SUD3OwBPBUNl1mCF6ydmDwsdUGhxrw
    2022-07-31 18:16:57,464 - synapse.access.http.8008 - 471 - INFO - POST-1967 - 2.10.223.85 - 8008 - {@ANOTHERUSERTHANthatoo:defis.info} Processed request: 0.736sec/0.000sec (0.001sec, 0.002sec) (0.001sec/0.075sec/7) 55B 500 "POST /_matrix/client/r0/join/%23monnaie-libre%3Amatrix.org HTTP/1.0" "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0" [4 dbevts]
    

    Anything else that would be useful to know?

    the space is #monnaie-libre:matrix.org and it's id is !nsxQkuXAmwktOKyUNc:matrix.org

    X-Needs-Info A-Spaces 
    opened by Thatoo 52
  • Clients repeatedly losing connection for up to a minute

    Clients repeatedly losing connection for up to a minute

    Apologies if this isn't actually a bug, but my server had been running well until updating a couple of weeks ago. Unfortunately I didn't catch the exact version as I was travelling and didn't notice the issue between a number of updates.

    Description

    Throughout the day, clients will show a message saying that they can't connect to the server for ~30 seconds to a minute. Every time I've checked the current log to see what's up it's shown the following:

    synapse.metrics._gc - 120 - INFO - sentinel - Collecting gc 1
    synapse.metrics._gc - 120 - INFO - sentinel - Collecting gc 2
    

    Steps to reproduce

    • have a client connected to the server active for a while
    • before too much time goes by it shows as disconnected from the server
    • once logging happens after sentinel - Collecting gc 2 connectivity resumes

    I've noticed that synapse.metrics._gc - 120 - INFO - sentinel - Collecting gc 1 will appear in the logs on its own and I won't have any connectivity issues when that happens.

    Version information

    • Homeserver: matrix.darkcloud.ca
    • Synapse Version: 1.57.0
    • Install method: pacman (the official archlinux package)
    • Environment: archlinux + postgres + no workers + no containers + vps with 2 gigs of ram and 2 gigs of swap (on SSD)

    I could probably provide more information about behaviour if I know what I'm looking for.

    Thanks for your time and for all the work on this project!

    S-Minor T-Defect X-Needs-Info 
    opened by prurigro 50
  • Synapse makes postgres to eat all ram until OOM

    Synapse makes postgres to eat all ram until OOM

    Description

    Synapse makes related postgres processes to slowly eat all ram (16g) and swap until OOM hits.

    Steps to reproduce

    I don't have concrete reproduction steps for this; I'll describe the things I've tried later on.

    Version information

    • hacklab.fi
    • Synapse 1.25.0
    • Postgres 13 (same issue with 12)
    • Debian buster
    • matrix.org provided Synapse repo

    When postgres and synapse are started, every synapse-related postgres process has an RES of almost exactly 250 MB; after 12 h or so their RES approaches 1200 MB each (median 1100 MB). Obviously I have tried all the normal postgres tunables, but nothing makes a perceivable difference to this behaviour. Restarting either the synapse or postgres process brings memory back to the normal RES, only for it to climb towards OOM again.

    I have tried to dig into what could possibly cause this behaviour, but at this point I am out of ideas. A comparable (though maybe twice as large) HS, kapsi.fi, which does have redis workers and so on enabled, does not experience this behaviour; its postgres processes are quite happy with roughly the memory they have at startup.
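    One mitigation that may be worth trying (an untested suggestion, not a confirmed fix): long-lived backends accumulate relation/catalog cache, so capping Synapse's connection pool bounds how many such backends exist and recycles them less often. Synapse's database section in homeserver.yaml accepts cp_min/cp_max pool-size args; the values below are purely illustrative:

```yaml
# homeserver.yaml database section -- sketch only, values illustrative
database:
  name: psycopg2
  args:
    user: synapse_user
    database: synapse
    host: localhost
    cp_min: 5    # minimum pooled connections
    cp_max: 10   # fewer long-lived backends -> less per-backend cache growth
```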

    Here is postgres=# select datname, usename, pid, backend_start, xact_start, query_start, state_change, wait_event_type, wait_event, state, query from pg_stat_activity;

       datname   |   usename    |  pid   |         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type |     wait_event      | state  |                                                                                       query                                                                                       
    -------------+--------------+--------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                 | postgres     | 172769 | 2021-01-20 10:02:07.119053+02 |                               |                               |                               | Activity        | LogicalLauncherMain |        | 
     synapse     | synapse_user | 172776 | 2021-01-20 10:02:08.283794+02 |                               | 2021-01-20 19:31:02.698945+02 | 2021-01-20 19:31:02.698961+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172774 | 2021-01-20 10:02:08.277446+02 |                               | 2021-01-20 19:31:02.693968+02 | 2021-01-20 19:31:02.693981+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172775 | 2021-01-20 10:02:08.277738+02 |                               | 2021-01-20 19:31:02.703326+02 | 2021-01-20 19:31:02.70335+02  | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172778 | 2021-01-20 10:02:08.28457+02  |                               | 2021-01-20 19:31:02.695861+02 | 2021-01-20 19:31:02.695879+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172777 | 2021-01-20 10:02:08.284047+02 |                               | 2021-01-20 19:31:02.697951+02 | 2021-01-20 19:31:02.697974+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172779 | 2021-01-20 10:02:08.303174+02 |                               | 2021-01-20 19:31:02.691738+02 | 2021-01-20 19:31:02.691757+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172780 | 2021-01-20 10:02:08.313032+02 |                               | 2021-01-20 19:31:02.692357+02 | 2021-01-20 19:31:02.692368+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172781 | 2021-01-20 10:02:08.313392+02 |                               | 2021-01-20 19:31:02.691576+02 | 2021-01-20 19:31:02.691586+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172782 | 2021-01-20 10:02:08.320273+02 |                               | 2021-01-20 19:31:02.690884+02 | 2021-01-20 19:31:02.690911+02 | Client          | ClientRead          | idle   | COMMIT
     synapse     | synapse_user | 172783 | 2021-01-20 10:02:08.321661+02 |                               | 2021-01-20 19:31:02.693378+02 | 2021-01-20 19:31:02.693389+02 | Client          | ClientRead          | idle   | COMMIT
     telegrambot | telegrambot  | 182339 | 2021-01-20 19:05:41.200507+02 |                               | 2021-01-20 19:30:59.350267+02 | 2021-01-20 19:30:59.350449+02 | Client          | ClientRead          | idle   | SELECT pg_advisory_unlock_all();                                                                                                                                                 +
                 |              |        |                               |                               |                               |                               |                 |                     |        | CLOSE ALL;                                                                                                                                                                       +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 DO $$                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 BEGIN                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     PERFORM * FROM pg_listening_channels() LIMIT 1;                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     IF FOUND THEN                                                                                                                                                +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                         UNLISTEN *;                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     END IF;                                                                                                                                                      +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 END;                                                                                                                                                             +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 $$;                                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        | RESET ALL;
     postgres    | postgres     | 182778 | 2021-01-20 19:30:59.687796+02 | 2021-01-20 19:31:02.683108+02 | 2021-01-20 19:31:02.683108+02 | 2021-01-20 19:31:02.683109+02 |                 |                     | active | select datname, usename, pid, backend_start, xact_start, query_start, state_change, wait_event_type, wait_event, state, query from pg_stat_activity;
     facebookbot | facebookbot  | 172786 | 2021-01-20 10:02:08.41789+02  |                               | 2021-01-20 19:30:32.971077+02 | 2021-01-20 19:30:32.971258+02 | Client          | ClientRead          | idle   | SELECT pg_advisory_unlock_all();                                                                                                                                                 +
                 |              |        |                               |                               |                               |                               |                 |                     |        | CLOSE ALL;                                                                                                                                                                       +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 DO $$                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 BEGIN                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     PERFORM * FROM pg_listening_channels() LIMIT 1;                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     IF FOUND THEN                                                                                                                                                +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                         UNLISTEN *;                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     END IF;                                                                                                                                                      +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 END;                                                                                                                                                             +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 $$;                                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        | RESET ALL;
     telegrambot | telegrambot  | 172794 | 2021-01-20 10:02:09.385851+02 |                               | 2021-01-20 19:28:32.072588+02 | 2021-01-20 19:28:32.074156+02 | Client          | ClientRead          | idle   | COMMIT
     grafana     | grafana      | 181467 | 2021-01-20 18:18:29.98819+02  |                               | 2021-01-20 19:31:00.981616+02 | 2021-01-20 19:31:00.981806+02 | Client          | ClientRead          | idle   | select * from alert
     telegrambot | telegrambot  | 172802 | 2021-01-20 10:02:20.980309+02 |                               | 2021-01-20 19:28:06.589256+02 | 2021-01-20 19:28:06.589273+02 | Client          | ClientRead          | idle   | COMMIT
     telegrambot | telegrambot  | 172803 | 2021-01-20 10:02:20.997652+02 |                               | 2021-01-20 19:28:32.168638+02 | 2021-01-20 19:28:32.170706+02 | Client          | ClientRead          | idle   | COMMIT
     telegrambot | telegrambot  | 172804 | 2021-01-20 10:02:21.01352+02  |                               | 2021-01-20 19:28:32.171649+02 | 2021-01-20 19:28:32.171689+02 | Client          | ClientRead          | idle   | COMMIT
     telegrambot | telegrambot  | 172805 | 2021-01-20 10:02:21.023916+02 |                               | 2021-01-20 19:28:32.076235+02 | 2021-01-20 19:28:32.076275+02 | Client          | ClientRead          | idle   | ROLLBACK
     signalbot   | signalbot    | 172813 | 2021-01-20 10:02:32.943974+02 |                               | 2021-01-20 19:30:45.81808+02  | 2021-01-20 19:30:45.81825+02  | Client          | ClientRead          | idle   | SELECT pg_advisory_unlock_all();                                                                                                                                                 +
                 |              |        |                               |                               |                               |                               |                 |                     |        | CLOSE ALL;                                                                                                                                                                       +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 DO $$                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 BEGIN                                                                                                                                                            +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     PERFORM * FROM pg_listening_channels() LIMIT 1;                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     IF FOUND THEN                                                                                                                                                +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                         UNLISTEN *;                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                     END IF;                                                                                                                                                      +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 END;                                                                                                                                                             +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                 $$;                                                                                                                                                              +
                 |              |        |                               |                               |                               |                               |                 |                     |        |                                                                                                                                                                                  +
                 |              |        |                               |                               |                               |                               |                 |                     |        | RESET ALL;
     facebookbot | facebookbot  | 172866 | 2021-01-20 10:05:53.116227+02 |                               | 2021-01-20 19:08:07.541813+02 | 2021-01-20 19:08:07.541822+02 | Client          | ClientRead          | idle   | ROLLBACK
     grafana     | grafana      | 181810 | 2021-01-20 18:38:29.988601+02 |                               | 2021-01-20 19:30:50.981635+02 | 2021-01-20 19:30:50.981968+02 | Client          | ClientRead          | idle   | select * from alert
     whatsappbot | whatsappbot  | 182449 | 2021-01-20 19:13:13.567375+02 |                               | 2021-01-20 19:30:52.996283+02 | 2021-01-20 19:30:52.997808+02 | Client          | ClientRead          | idle   | UPDATE puppet SET displayname=$1, name_quality=$2, avatar=$3, avatar_url=$4, custom_mxid=$5, access_token=$6, next_batch=$7, enable_presence=$8, enable_receipts=$9 WHERE jid=$10
     whatsappbot | whatsappbot  | 182441 | 2021-01-20 19:13:13.551931+02 |                               | 2021-01-20 19:30:57.959069+02 | 2021-01-20 19:30:57.960742+02 | Client          | ClientRead          | idle   | UPDATE puppet SET displayname=$1, name_quality=$2, avatar=$3, avatar_url=$4, custom_mxid=$5, access_token=$6, next_batch=$7, enable_presence=$8, enable_receipts=$9 WHERE jid=$10
                 |              | 172766 | 2021-01-20 10:02:07.118342+02 |                               |                               |                               | Activity        | BgWriterMain        |        | 
                 |              | 172765 | 2021-01-20 10:02:07.118096+02 |                               |                               |                               | Activity        | CheckpointerMain    |        | 
                 |              | 172767 | 2021-01-20 10:02:07.118604+02 |                               |                               |                               | Activity        | WalWriterMain       |        | 
    (28 rows)
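    To see which backends' resident memory actually grows over time (rather than one-off totals), the pids from pg_stat_activity can be paired with /proc/<pid>/status on Linux. This is a hypothetical monitoring sketch, not something from the report; vmrss_kb and backend_rss are names invented here:

```python
import os
import re

# Track per-backend resident memory by reading /proc/<pid>/status for
# each pid reported by "SELECT pid FROM pg_stat_activity". Run this
# periodically (e.g. from cron) and diff the results to spot the
# backends whose RES keeps climbing.

def vmrss_kb(status_text):
    """Extract the resident set size (kB) from a /proc/<pid>/status blob."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def backend_rss(pids):
    """Map pid -> RES in kB for the pids that still exist."""
    sizes = {}
    for pid in pids:
        try:
            with open(f"/proc/{pid}/status") as f:
                sizes[pid] = vmrss_kb(f.read())
        except FileNotFoundError:
            pass  # backend exited between the query and the scan
    return sizes

if __name__ == "__main__":
    # Demo on our own pid; in practice, feed in the postgres backend pids.
    print(backend_rss([os.getpid()]))
```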
    

    I've also tried to run (gdb) p MemoryContextStats(TopMemoryContext) on one such process; here are the results from startup and then from when the RAM is almost eaten:

    TopMemoryContext: 154592 total in 8 blocks; 47496 free (98 chunks); 107096 used
      pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1448 free (0 chunks); 6744 used
      HandleParallelMessages: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
      RI compare cache: 16384 total in 2 blocks; 6656 free (3 chunks); 9728 used
      RI query cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      RI constraint cache: 40888 total in 2 blocks; 2616 free (0 chunks); 38272 used
      Sequence values: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Btree proof lookup cache: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      CFuncHash: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      Tsearch dictionary cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Tsearch parser cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Tsearch configuration cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      TableSpace cache: 8192 total in 1 blocks; 2088 free (0 chunks); 6104 used
      Type information cache: 24616 total in 2 blocks; 2616 free (0 chunks); 22000 used
      Operator lookup cache: 24576 total in 2 blocks; 10752 free (3 chunks); 13824 used
      RowDescriptionContext: 8192 total in 1 blocks; 6888 free (0 chunks); 1304 used
      MessageContext: 8192 total in 1 blocks; 6888 free (1 chunks); 1304 used
      Operator class cache: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      smgr relation table: 131072 total in 5 blocks; 73928 free (19 chunks); 57144 used
      TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used
      Portal hash: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      TopPortalContext: 8192 total in 1 blocks; 7928 free (1 chunks); 264 used
      Relcache by OID: 32768 total in 3 blocks; 10488 free (6 chunks); 22280 used
      CacheMemoryContext: 4689520 total in 32 blocks; 1746360 free (0 chunks); 2943160 used
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: room_retention_max_lifetime_idx
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: room_retention_pkey
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: pg_toast_17486_index
        index info: 3072 total in 2 blocks; 1080 free (1 chunks); 1992 used: room_tag_uniqueness
        CachedPlan: 8192 total in 4 blocks; 3464 free (0 chunks); 4728 used: SELECT 1 FROM ONLY "public"."access_tokens" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF ...
        CachedPlan: 8192 total in 4 blocks; 2864 free (0 chunks); 5328 used: SELECT 1 FROM ONLY "public"."events" x WHERE "event_id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: event_push_summary_stream_ordering_lock_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: ex_outlier_stream_pkey
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: group_attestations_renewals_v_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: group_attestations_renewals_u_idx
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: group_attestations_renewals_g_idx
        index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: monthly_active_users_users
        index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: monthly_active_users_time_stamp
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_signature_stream_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: ui_auth_sessions_session_id_key
        index info: 3072 total in 2 blocks; 664 free (1 chunks); 2408 used: e2e_one_time_keys_json_uniqueness
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: e2e_room_keys_versions_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: access_tokens_device_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: access_tokens_token_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: access_tokens_pkey
        CachedPlanSource: 4096 total in 3 blocks; 1416 free (0 chunks); 2680 used: SELECT 1 FROM ONLY "public"."access_tokens" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF ...
          CachedPlanQuery: 4096 total in 3 blocks; 1192 free (1 chunks); 2904 used
        SPI Plan: 1024 total in 1 blocks; 576 free (0 chunks); 448 used
        CachedPlanSource: 4096 total in 3 blocks; 1416 free (0 chunks); 2680 used: SELECT 1 FROM ONLY "public"."events" x WHERE "event_id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
          CachedPlanQuery: 4096 total in 3 blocks; 592 free (0 chunks); 3504 used
        SPI Plan: 1024 total in 1 blocks; 576 free (0 chunks); 448 used
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: pg_toast_17607_index
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_events_event_id_key
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: evauth_edges_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: local_current_membership_room_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: local_current_membership_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: pg_toast_17742_index
        TS dictionary: 1024 total in 1 blocks; 688 free (0 chunks); 336 used: simple
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_stats_state_room
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: device_lists_remote_cache_unique_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: user_daily_visits_uts_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: user_daily_visits_ts_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_groups_room_id_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_groups_pkey
        index info: 3072 total in 2 blocks; 968 free (1 chunks); 2104 used: users_who_share_private_rooms_u_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: users_who_share_private_rooms_r_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: users_who_share_private_rooms_o_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: users_in_public_rooms_u_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: users_in_public_rooms_r_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_search_user_idx
        index info: 4096 total in 3 blocks; 2256 free (2 chunks); 1840 used: user_directory_search_fts_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_user_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_room_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: erased_users_user
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_attestations_remote_v_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_attestations_remote_u_idx
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: group_attestations_remote_g_idx
        index info: 2048 total in 2 blocks; 656 free (1 chunks); 1392 used: group_roles_group_id_role_id_key
        index info: 3072 total in 2 blocks; 1048 free (1 chunks); 2024 used: group_summary_roles_group_id_role_id_role_order_key
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_summary_users_g_idx
        index info: 2048 total in 2 blocks; 656 free (1 chunks); 1392 used: group_room_categories_group_id_category_id_key
        index info: 3072 total in 2 blocks; 1048 free (1 chunks); 2024 used: group_summary_room_categories_group_id_category_id_cat_orde_key
        index info: 3072 total in 2 blocks; 1080 free (1 chunks); 1992 used: group_summary_rooms_g_idx
        index info: 3072 total in 2 blocks; 1064 free (1 chunks); 2008 used: group_summary_rooms_group_id_category_id_room_id_room_order_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: group_rooms_r_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: group_rooms_g_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_stats_historical_end_ts
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: user_stats_historical_pkey
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_stats_current_pkey
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: current_state_delta_stream_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: device_federation_outbox_id
        index info: 2048 total in 2 blocks; 496 free (1 chunks); 1552 used: device_federation_outbox_destination_id
        index info: 2048 total in 2 blocks; 496 free (1 chunks); 1552 used: device_lists_outbound_pokes_user
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: device_lists_outbound_pokes_stream
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: device_lists_outbound_pokes_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: cache_invalidation_stream_by_instance_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: e2e_cross_signing_keys_stream_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: e2e_cross_signing_keys_idx
        index info: 2048 total in 2 blocks; 920 free (0 chunks); 1128 used: groups_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: event_relations_relates
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: event_relations_id
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: local_group_membership_u_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: local_group_membership_g_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: server_keys_json_uniqueness
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: private_user_data_max_stream_id_lock_key
        index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: event_txn_id_txn_id
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: event_txn_id_ts
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: event_txn_id_event_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: stream_positions_idx
        index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: e2e_room_keys_with_version_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: device_inbox_user_stream_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: device_inbox_stream_id_user_id
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: room_tag_revisions_uniqueness
        index info: 4096 total in 3 blocks; 2184 free (2 chunks); 1912 used: room_memberships_user_room_forgotten
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_user_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_room_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_event_id_key
        index info: 3544 total in 3 blocks; 432 free (0 chunks); 3112 used: remote_media_repository_thumbn_media_origin_id_width_height_met
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: public_room_index
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: rooms_pkey
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: pushers2_pkey
        index info: 3072 total in 2 blocks; 808 free (1 chunks); 2264 used: pushers2_app_id_pushkey_user_name_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: application_services_txns_id
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: application_services_txns_as_id_txn_id_key
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: user_directory_stream_pos_lock_key
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: room_account_data_stream_id
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: room_account_data_uniqueness
        187 more child contexts containing 402664 total in 360 blocks; 126120 free (131 chunks); 276544 used
      WAL record construction: 49768 total in 2 blocks; 6360 free (0 chunks); 43408 used
      PrivateRefCount: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
      MdSmgr: 16384 total in 2 blocks; 3944 free (3 chunks); 12440 used
      LOCALLOCK hash: 32768 total in 3 blocks; 16824 free (8 chunks); 15944 used
      Timezones: 104120 total in 2 blocks; 2616 free (0 chunks); 101504 used
      ErrorContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
    Grand total: 6137360 bytes in 653 blocks; 2226664 free (318 chunks); 3910696 used
    
    TopMemoryContext: 154592 total in 8 blocks; 47496 free (98 chunks); 107096 used
      pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1448 free (0 chunks); 6744 used
      HandleParallelMessages: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
      RI compare cache: 16384 total in 2 blocks; 6656 free (3 chunks); 9728 used
      RI query cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      RI constraint cache: 40888 total in 2 blocks; 2616 free (0 chunks); 38272 used
      Sequence values: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Btree proof lookup cache: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      CFuncHash: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      Tsearch dictionary cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Tsearch parser cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      Tsearch configuration cache: 8192 total in 1 blocks; 1576 free (0 chunks); 6616 used
      TableSpace cache: 8192 total in 1 blocks; 2088 free (0 chunks); 6104 used
      Type information cache: 24616 total in 2 blocks; 2616 free (0 chunks); 22000 used
      Operator lookup cache: 24576 total in 2 blocks; 10752 free (3 chunks); 13824 used
      RowDescriptionContext: 8192 total in 1 blocks; 6888 free (0 chunks); 1304 used
      MessageContext: 8192 total in 1 blocks; 6888 free (1 chunks); 1304 used
      Operator class cache: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      smgr relation table: 131072 total in 5 blocks; 73928 free (19 chunks); 57144 used
      TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks); 264 used
      Portal hash: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
      TopPortalContext: 8192 total in 1 blocks; 7928 free (1 chunks); 264 used
      Relcache by OID: 32768 total in 3 blocks; 9448 free (6 chunks); 23320 used
      CacheMemoryContext: 4722032 total in 34 blocks; 1659648 free (1 chunks); 3062384 used
        index info: 3072 total in 2 blocks; 1048 free (1 chunks); 2024 used: e2e_fallback_keys_json_uniqueness
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: threepid_validation_token_session_id
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: threepid_validation_token_pkey
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: device_federation_inbox_sender_id
        index info: 2048 total in 2 blocks; 920 free (0 chunks); 1128 used: room_aliases_id
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: room_aliases_room_alias_key
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: blocked_rooms_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: pg_toast_17662_index
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: room_retention_max_lifetime_idx
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: room_retention_pkey
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: pg_toast_17486_index
        index info: 3072 total in 2 blocks; 1080 free (1 chunks); 1992 used: room_tag_uniqueness
        CachedPlan: 8192 total in 4 blocks; 3464 free (0 chunks); 4728 used: SELECT 1 FROM ONLY "public"."access_tokens" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF ...
        CachedPlan: 8192 total in 4 blocks; 2864 free (0 chunks); 5328 used: SELECT 1 FROM ONLY "public"."events" x WHERE "event_id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: event_push_summary_stream_ordering_lock_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: ex_outlier_stream_pkey
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: group_attestations_renewals_v_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: group_attestations_renewals_u_idx
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: group_attestations_renewals_g_idx
        index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: monthly_active_users_users
        index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used: monthly_active_users_time_stamp
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_signature_stream_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: ui_auth_sessions_session_id_key
        index info: 3072 total in 2 blocks; 664 free (1 chunks); 2408 used: e2e_one_time_keys_json_uniqueness
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: e2e_room_keys_versions_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: access_tokens_device_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: access_tokens_token_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: access_tokens_pkey
        CachedPlanSource: 4096 total in 3 blocks; 1416 free (0 chunks); 2680 used: SELECT 1 FROM ONLY "public"."access_tokens" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF ...
          CachedPlanQuery: 4096 total in 3 blocks; 1192 free (1 chunks); 2904 used
        SPI Plan: 1024 total in 1 blocks; 576 free (0 chunks); 448 used
        CachedPlanSource: 4096 total in 3 blocks; 1416 free (0 chunks); 2680 used: SELECT 1 FROM ONLY "public"."events" x WHERE "event_id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
          CachedPlanQuery: 4096 total in 3 blocks; 592 free (0 chunks); 3504 used
        SPI Plan: 1024 total in 1 blocks; 576 free (0 chunks); 448 used
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: pg_toast_17607_index
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_events_event_id_key
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: evauth_edges_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: local_current_membership_room_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: local_current_membership_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: pg_toast_17742_index
        TS dictionary: 1024 total in 1 blocks; 688 free (0 chunks); 336 used: simple
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_stats_state_room
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: device_lists_remote_cache_unique_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: user_daily_visits_uts_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_daily_visits_ts_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_groups_room_id_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: state_groups_pkey
        index info: 3072 total in 2 blocks; 968 free (1 chunks); 2104 used: users_who_share_private_rooms_u_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: users_who_share_private_rooms_r_idx
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: users_who_share_private_rooms_o_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: users_in_public_rooms_u_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: users_in_public_rooms_r_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_search_user_idx
        index info: 4096 total in 3 blocks; 2256 free (2 chunks); 1840 used: user_directory_search_fts_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_user_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_directory_room_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: erased_users_user
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_attestations_remote_v_idx
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_attestations_remote_u_idx
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: group_attestations_remote_g_idx
        index info: 2048 total in 2 blocks; 656 free (1 chunks); 1392 used: group_roles_group_id_role_id_key
        index info: 3072 total in 2 blocks; 1048 free (1 chunks); 2024 used: group_summary_roles_group_id_role_id_role_order_key
        index info: 2048 total in 2 blocks; 904 free (0 chunks); 1144 used: group_summary_users_g_idx
        index info: 2048 total in 2 blocks; 656 free (1 chunks); 1392 used: group_room_categories_group_id_category_id_key
        index info: 3072 total in 2 blocks; 1048 free (1 chunks); 2024 used: group_summary_room_categories_group_id_category_id_cat_orde_key
        index info: 3072 total in 2 blocks; 1080 free (1 chunks); 1992 used: group_summary_rooms_g_idx
        index info: 3072 total in 2 blocks; 1064 free (1 chunks); 2008 used: group_summary_rooms_group_id_category_id_room_id_room_order_key
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: group_rooms_r_idx
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: group_rooms_g_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_stats_historical_end_ts
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: user_stats_historical_pkey
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: user_stats_current_pkey
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: current_state_delta_stream_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: device_federation_outbox_id
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: device_federation_outbox_destination_id
        index info: 2048 total in 2 blocks; 416 free (1 chunks); 1632 used: device_lists_outbound_pokes_user
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: device_lists_outbound_pokes_stream
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: device_lists_outbound_pokes_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: cache_invalidation_stream_by_instance_id
        index info: 2048 total in 2 blocks; 792 free (0 chunks); 1256 used: e2e_cross_signing_keys_stream_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: e2e_cross_signing_keys_idx
        index info: 2048 total in 2 blocks; 920 free (0 chunks); 1128 used: groups_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: event_relations_relates
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: event_relations_id
        index info: 2048 total in 2 blocks; 528 free (1 chunks); 1520 used: local_group_membership_u_idx
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: local_group_membership_g_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: server_keys_json_uniqueness
        index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: private_user_data_max_stream_id_lock_key
        index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: event_txn_id_txn_id
        index info: 2048 total in 2 blocks; 840 free (0 chunks); 1208 used: event_txn_id_ts
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: event_txn_id_event_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: stream_positions_idx
        index info: 3072 total in 2 blocks; 696 free (1 chunks); 2376 used: e2e_room_keys_with_version_idx
        index info: 3072 total in 2 blocks; 840 free (1 chunks); 2232 used: device_inbox_user_stream_id
        index info: 2048 total in 2 blocks; 448 free (1 chunks); 1600 used: device_inbox_stream_id_user_id
        index info: 2048 total in 2 blocks; 608 free (1 chunks); 1440 used: room_tag_revisions_uniqueness
        index info: 4096 total in 3 blocks; 2184 free (2 chunks); 1912 used: room_memberships_user_room_forgotten
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_user_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_room_id
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: room_memberships_event_id_key
        index info: 3544 total in 3 blocks; 432 free (0 chunks); 3112 used: remote_media_repository_thumbn_media_origin_id_width_height_met
        index info: 2048 total in 2 blocks; 824 free (0 chunks); 1224 used: public_room_index
        195 more child contexts containing 421096 total in 376 blocks; 131848 free (135 chunks); 289248 used
      WAL record construction: 49768 total in 2 blocks; 6360 free (0 chunks); 43408 used
      PrivateRefCount: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
      MdSmgr: 16384 total in 2 blocks; 3032 free (6 chunks); 13352 used
      LOCALLOCK hash: 32768 total in 3 blocks; 16824 free (8 chunks); 15944 used
      Timezones: 104120 total in 2 blocks; 2616 free (0 chunks); 101504 used
      ErrorContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
    Grand total: 6187280 bytes in 671 blocks; 2143936 free (325 chunks); 4043344 used
    

    So, something in our Synapse instance makes Postgres hold memory that is never released, or that accumulates over time, but I can't figure out what. Debugging help wanted, if there are no known things to try.

    ADD: While the root issue is exactly Synapse's Postgres memory growing, we also deploy the following bridges on the same server, in case they bear on this equation:

    mautrix-telegram mautrix-whatsapp mautrix-facebook mautrix-signal SmsMatrix bot appservice-slack appservice-discord
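    If the database is on PostgreSQL 14 or newer, the memory contexts shown in the dumps above can also be queried live through the pg_backend_memory_contexts view. A minimal sketch (note the view only covers the backend you are connected to, and the database name synapse here is an assumption):

    ```shell
    # List the ten largest memory contexts of the current backend
    # (PostgreSQL 14+; "synapse" database name is an assumption).
    psql -d synapse -c "SELECT name, ident, used_bytes
                        FROM pg_backend_memory_contexts
                        ORDER BY used_bytes DESC
                        LIMIT 10;"
    ```

    Watching whether CacheMemoryContext keeps growing between runs would narrow down whether the growth is in cached plans/relcache entries or elsewhere.
    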

    S-Major T-Defect A-Database 
    opened by olmari 49
  • Synapse uses TLS1.0 for smtp which is rejected by some mail servers

    Synapse uses TLS1.0 for smtp which is rejected by some mail servers

    Description

    Requesting a password reset from a brand-new Synapse installation returns a 500 error, with the error twisted.mail._except.SMTPConnectError: Unable to connect to server.

    Steps to reproduce

    • On a vanilla homeserver, add the following configuration to homeserver.yaml:
    email:
      enable_notifs: false
      smtp_host: [hostname or ip]
      smtp_port: 587
      smtp_user: [username]
      smtp_pass: [password]
      notif_from: "Your friendly %(app)s Home Server <[email]>"
      app_name: Matrix
    
    • Restart synapse to apply changes
    • Using riot, change the homeserver url and then select "Set a new password"
    • Enter the valid email address and a new password
    • Select "Send Reset Email"

    After the last step, the server will respond with a 500 error, and the following will be displayed in synapse's log:

    Oct 17 15:19:00 [hostname] synapse[11936]: synapse.handlers.identity: [POST-49] Error sending threepid validation email to [email]
                                                    Traceback (most recent call last):
                                                      File "/nix/store/1al2bnj8f2y66jxmzhi00aw3a7wp1jgw-matrix-synapse-1.4.0/lib/python3.7/site-packages/synapse/handlers/identity.py", line 347, in send_threepid_validation
                                                        yield send_email_func(email_address, token, client_secret, session_id)
                                                    twisted.mail._except.SMTPConnectError: Unable to connect to server.
    

    And this is displayed in the postfix log of the receiving server:

    Oct 17 15:19:00 [hostname] postfix/smtpd[2546]: connect from unknown[ip]
    Oct 17 15:19:00 [hostname] postfix/smtpd[2546]: SSL_accept error from unknown[ip]: -1
    Oct 17 15:19:00 [hostname] postfix/smtpd[2546]: warning: TLS library problem: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol:ssl/statem/statem_srvr.c:1661:
    Oct 17 15:19:00 [hostname] postfix/smtpd[2546]: lost connection after STARTTLS from unknown[ip]
    Oct 17 15:19:00 [hostname] postfix/smtpd[2546]: disconnect from unknown[ip] ehlo=1 starttls=0/1 commands=1/2
    

    I've tested this configuration with both require_transport_security: false and require_transport_security: true. Also worth mentioning that the username / password are correct, as logging into the mail server from a mail program and sending a test email from there works fine.
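    One way to probe this from the client side (a minimal sketch, not Synapse's actual code path; the helper and host name are hypothetical) is to attempt STARTTLS with a context that, like the postfix server above, refuses anything below TLS 1.2:

    ```python
    import smtplib
    import ssl

    # A client context that refuses TLS < 1.2, mirroring what strict
    # mail servers enforce; the handshake fails if only TLS 1.0 works.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2

    def try_starttls(host: str, port: int = 587) -> None:
        # Hypothetical helper: raises ssl.SSLError when the two sides
        # cannot agree on a protocol version, as in this issue.
        with smtplib.SMTP(host, port, timeout=10) as smtp:
            smtp.starttls(context=ctx)
            smtp.noop()
    ```

    The equivalent check with `openssl s_client -starttls smtp -tls1` against the mail server would show the same "unsupported protocol" rejection seen in the postfix log.
    
    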

    Version information

    New personal homeserver running synapse.

    • Version: 1.4.0

    • Install method: Package Manager

    • Platform: NixOS running on Hetzner Cloud VM for both Matrix and mail server
    z-bug z-p2 Z-Upstream-Bug 
    opened by gjabell 46
  • Redact all events older than a certain time

    Redact all events older than a certain time

    We could add this to the prune API: in addition, when you prune a room, you could also redact all those events, so that their content is removed from federated copies of the room too.

    opened by rubo77 46
  • Lock Dependencies in Synapse

    Lock Dependencies in Synapse

    Hi folks,

    Right now most of Synapse's dependencies only declare a minimum version bound, like Twisted>=18.9.0 (cite). This means that every time we build release artifacts or install Synapse from source, we unconditionally pull in the latest versions of our dependencies at that moment.

    This creates unnecessary risk, as our dependencies can change out from under us without warning. For example, installing Synapse 1.49.0 may work today but fail tomorrow if one of our dependencies releases a new version overnight.

    This exact scenario bit us with the release of attrs 21.1.0 (synapse#9936) on May 6th. We were forced to release Synapse 1.33.1 less than a day after 1.33.0 as the attrs release broke our ability to install Synapse from source, build working Debian packages, or create functioning Docker images, even though nothing in our repositories had changed.

    The absence of locked dependencies also impedes our ability to pursue continuous delivery, maintain LTS releases, or easily backport security fixes as we cannot recreate older release artifacts without also implicitly updating all of the dependencies included therein.

    Definition of Done

    • For any given Synapse commit, it is possible to repeatably create identical virtualenvs.

    Further Discussion / Constraints

    Resolving this implies that it must be possible to enumerate exact versions of all dependencies included in any given upstream release of Synapse, using only a clone of the Synapse repository. This is important for auditing, as it allows us to easily answer questions like "did we ever ship a release with a vulnerable version of that dependency?"

    Moreover, any solution must record hash digests to protect against remote tampering, such as with pip's hash-checking mode.

    To ease maintenance burden (and avail of GitHub's supply chain security features), it would be nice if whatever solution we arrived at integrated with Dependabot. Supported package ecosystems for Python are requirements.txt (pip / pip-tools), pipfile.lock (Pipenv), and poetry.lock (Poetry).
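    As a sketch of the pip-tools route (file names are illustrative; it assumes a requirements.in listing the direct dependencies):

    ```shell
    # Pin every transitive dependency, recording hash digests
    # alongside the exact versions (pip-tools).
    pip-compile --generate-hashes --output-file=requirements.txt requirements.in

    # Installation then fails if any downloaded artifact's hash
    # does not match the lock file (pip's hash-checking mode).
    pip install --require-hashes -r requirements.txt
    ```

    Committing the generated requirements.txt would satisfy the "repeatably create identical virtualenvs" criterion, since the lock file lives in the repository alongside each release tag.
    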

    z-p2 T-Task 
    opened by callahad 45
  • Youtube captions (link previews) are useless

    Youtube captions (link previews) are useless

    Description

    At some point YouTube updated its site, and now all (?) captions generated by Synapse for it read:

    Before you continue to YouTube Sign in a Google company Before you continue to YouTube Google uses cookies and data to: Deliver and maintain services, like tracking outages and protecting against spam, fraud, and abuse Measure audience engagement and site statistics to understand how our services are used

    This is basically useless considering the primary point of the function, in particular in the case of a very popular website.

    Steps to reproduce

    • send a YouTube URL in an m.room.message into a room, e.g. https://www.youtube.com/watch?v=RzJf02TIqxk
    • wait for Synapse to produce a caption for the link
    • witness the caption to contain no information about the actual link :)

    Expected results:

    • A descriptive message about the contents, such as the one produced by an up-to-date youtube-dl --get-description:

    Authentic recordings from inside Hetzner Online's data center park Just like birds and insects, each server sings its own unique song.
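    For comparison, YouTube's public oEmbed endpoint returns structured metadata (title, author) without the cookie-consent interstitial; querying it for the example URL is one way to see what a preview could contain (a diagnostic sketch, not necessarily what Synapse does):

    ```shell
    # Fetch YouTube's oEmbed metadata for the example video.
    curl -s "https://www.youtube.com/oembed?format=json&url=https://www.youtube.com/watch?v=RzJf02TIqxk"
    ```
    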

    Version information

    • Homeserver: matrix.org
    S-Minor T-Defect 
    opened by eras 45
  • [Request] Allow account deletion

    [Request] Allow account deletion

    I've looked over the Matrix spec and unfortunately /account/deactivate only requires that any future login become impossible. Synapse fulfils the spec, but does not actually delete the account, which a) leaves the user's data on the server after the user has expressed the intention to never use the account again, and b) leaves the user ID taken (i.e. future registration attempts on it fail with "M_USER_IN_USE")

    Either /account/deactivate should completely wipe any data belonging to that account from the homeserver, or there should be a second endpoint /account/delete that does this.

    Keeping account data from people who specifically want to stop using one's service seems shady at best in terms of privacy policy, and technically impractical besides, since it accumulates data that a) isn't required for the continued functioning of the service and b) takes up ever more storage space for no apparent benefit to the service
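    For what it's worth, current Synapse versions expose an admin API that can erase a user's data on deactivation. A hedged sketch (the homeserver URL, user ID, and token are placeholders):

    ```shell
    # Deactivate the account and request erasure of its data
    # (Synapse admin API; requires an admin access token).
    curl -X POST "https://matrix.example.com/_synapse/admin/v1/deactivate/@user:example.com" \
         -H "Authorization: Bearer <admin_access_token>" \
         -H "Content-Type: application/json" \
         -d '{"erase": true}'
    ```

    Note this implements erasure rather than full deletion: the user ID still cannot be re-registered afterwards, so point b) above remains.
    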

    opened by MoritzMaxeiner 44
  • Add the ability to configure which workers will issue `POST /_matrix/key/v2/query` requests

    Add the ability to configure which workers will issue `POST /_matrix/key/v2/query` requests

    When an operation that involves verifying a remote homeserver's signature occurs on a worker, that worker will attempt to locate both the corresponding public key as well as the valid_until_ts attribute of that key. It will first check the database for a local copy, then if it can't find it will make an HTTP request to either a configured trusted_key_server, or the origin homeserver itself.

    This imposes a requirement for outbound federation access on workers that would otherwise never need to contact other homeservers. Ideally only a federation_sender (and potentially the main process) would need that ability. Therefore, it would be nice if a given set of worker names could be configured as those that should go out and fetch keys from remote locations; other workers not in the set would then make an HTTP replication request to have one of them perform the operation. This has implications for high-security network environments.

    As a workaround, workers that sysadmins do not wish to make these outbound requests can be individually configured with a trusted_key_servers entry of the local server_name. Those workers will then make a request to the load balancer, which will be directed back to a worker that is authorised for outbound federation traffic. This is not an ideal solution, however, as it requires both the original worker and the designated federation worker to perform a check against the local database, whereas an incoming replication request handler could assume that the local database had already been consulted.
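    In the affected worker's configuration, that workaround would look roughly like this (the server name is a placeholder for the local server_name):

    ```yaml
    # Worker config sketch: point key fetching at our own server name,
    # so key requests go via the load balancer to a worker that is
    # authorised for outbound federation. "example.com" is a placeholder.
    trusted_key_servers:
      - server_name: "example.com"
    ```
    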

    A-Federation A-Workers A-Config T-Enhancement 
    opened by anoadragon453 0
  • Add some clarifying comments and refactor a portion of the `Keyring` class for readability

    Add some clarifying comments and refactor a portion of the `Keyring` class for readability

    This caused me a great deal of confusion both back when writing https://github.com/matrix-org/synapse/issues/12767 and now. I've added some clarifying comments and refactored some of the logic in the Keyring class.

    Logic should not have changed, but please double-check. Closes https://github.com/matrix-org/synapse/issues/12767.

    opened by anoadragon453 0
  • Add `tag` to `listeners` documentation

    Add `tag` to `listeners` documentation

    Closes: #14746

    Pull Request Checklist

    • [x] Pull request is based on the develop branch
    • [x] Pull request includes a changelog file. The entry should:
      • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
      • Use markdown where necessary, mostly for code blocks.
      • End with either a period (.) or an exclamation mark (!).
      • Start with a capital letter.
      • Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
    • [x] Pull request includes a sign off
    • [x] Code style is correct (run the linters)

    Signed-off-by: Dirk Klimpel [email protected]

    opened by dklimpel 0
  • populate_stats_process_rooms failing with an unknown room

    populate_stats_process_rooms failing with an unknown room

    As part of #14643 we are re-populating room and user stats, this is currently failing with an error:

    Room !XXX for event $YYY is unknown
    
    Stack trace
    2023-01-06 17:45:04,582 - synapse.storage.background_updates - 431 - INFO - background_updates-0 - Starting update batch on background update 'populate_stats_process_rooms'
    2023-01-06 17:45:04,725 - synapse.storage.background_updates - 302 - ERROR - background_updates-0 - Error doing update
    Traceback (most recent call last):
      File "venv/lib/python3.8-pyston2.3/runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "venv/lib/python3.8-pyston2.3/runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "src/synapse/app/homeserver.py", line 386, in <module>
        main()
      File "src/synapse/app/homeserver.py", line 382, in main
        run(hs)
      File "src/synapse/app/homeserver.py", line 361, in run
        _base.start_reactor(
      File "src/synapse/app/_base.py", line 191, in start_reactor
        run()
      File "src/synapse/app/_base.py", line 173, in run
        run_command()
      File "src/synapse/app/_base.py", line 148, in <lambda>
        run_command: Callable[[], None] = lambda: reactor.run(),
      File "venv/site-packages/twisted/internet/base.py", line 1318, in run
        self.mainLoop()
      File "venv/site-packages/twisted/internet/base.py", line 1328, in mainLoop
        self.runUntilCurrent()
      File "venv/site-packages/twisted/internet/base.py", line 967, in runUntilCurrent
        f(*a, **kw)
      File "src/synapse/storage/databases/main/events_worker.py", line 1185, in fire
        d.callback(row_dict)
      File "venv/site-packages/twisted/internet/defer.py", line 696, in callback
        self._startRunCallbacks(result)
      File "venv/site-packages/twisted/internet/defer.py", line 798, in _startRunCallbacks
        self._runCallbacks()
      File "venv/site-packages/twisted/internet/defer.py", line 892, in _runCallbacks
        current.result = callback(  # type: ignore[misc]
      File "venv/site-packages/twisted/internet/defer.py", line 1792, in gotResult
        _inlineCallbacks(r, gen, status, context)
      File "venv/site-packages/twisted/internet/defer.py", line 1775, in _inlineCallbacks
        status.deferred.errback()
      File "venv/site-packages/twisted/internet/defer.py", line 735, in errback
        self._startRunCallbacks(fail)
      File "venv/site-packages/twisted/internet/defer.py", line 798, in _startRunCallbacks
        self._runCallbacks()
      File "venv/site-packages/twisted/internet/defer.py", line 892, in _runCallbacks
        current.result = callback(  # type: ignore[misc]
      File "venv/site-packages/twisted/internet/defer.py", line 735, in errback
        self._startRunCallbacks(fail)
      File "venv/site-packages/twisted/internet/defer.py", line 798, in _startRunCallbacks
        self._runCallbacks()
      File "venv/site-packages/twisted/internet/defer.py", line 892, in _runCallbacks
        current.result = callback(  # type: ignore[misc]
      File "venv/site-packages/twisted/internet/defer.py", line 1792, in gotResult
        _inlineCallbacks(r, gen, status, context)
      File "venv/site-packages/twisted/internet/defer.py", line 1693, in _inlineCallbacks
        result = context.run(
      File "venv/site-packages/twisted/python/failure.py", line 518, in throwExceptionIntoGenerator
        return g.throw(self.type, self.value, self.tb)
    Traceback (most recent call last):
      File "src/synapse/storage/background_updates.py", line 294, in run_background_updates
        result = await self.do_next_background_update(sleep)
      File "src/synapse/storage/background_updates.py", line 424, in do_next_background_update
        await self._do_background_update(desired_duration_ms)
      File "src/synapse/storage/background_updates.py", line 467, in _do_background_update
        items_updated = await update_handler(progress, batch_size)
      File "src/synapse/storage/databases/main/stats.py", line 206, in _populate_stats_process_rooms
        await self._calculate_and_set_initial_state_for_room(room_id)
      File "src/synapse/storage/databases/main/stats.py", line 557, in _calculate_and_set_initial_state_for_room
        state_event_map = await self.get_events(event_ids, get_prev_content=False)  # type: ignore[attr-defined]
      File "src/synapse/storage/databases/main/events_worker.py", line 536, in get_events
        events = await self.get_events_as_list(
      File "src/synapse/logging/opentracing.py", line 896, in _wrapper
        return await func(*args, **kwargs)  # type: ignore[misc]
      File "src/synapse/logging/opentracing.py", line 896, in _wrapper
        return await func(*args, **kwargs)  # type: ignore[misc]
      File "src/synapse/storage/databases/main/events_worker.py", line 586, in get_events_as_list
        event_entry_map = await self.get_unredacted_events_from_cache_or_db(
      File "src/synapse/storage/databases/main/events_worker.py", line 818, in get_unredacted_events_from_cache_or_db
        missing_events: Dict[str, EventCacheEntry] = await delay_cancellation(
      File "venv/site-packages/twisted/internet/defer.py", line 1697, in _inlineCallbacks
        result = context.run(gen.send, result)
      File "src/synapse/storage/databases/main/events_worker.py", line 804, in get_missing_events_from_cache_or_db
        raise e
      File "src/synapse/storage/databases/main/events_worker.py", line 797, in get_missing_events_from_cache_or_db
        db_missing_events = await self._get_events_from_db(
      File "src/synapse/storage/databases/main/events_worker.py", line 1297, in _get_events_from_db
        raise Exception(
    Exception: Room !XXX for event $YYY is unknown
    

    This exception is raised at:

    https://github.com/matrix-org/synapse/blob/db1cfe9c80a707995fcad8f3faa839acb247068a/synapse/storage/databases/main/events_worker.py#L1302-L1304

    It happens when attempting to fetch a non-membership event from a room whose room version is unknown (room_version IS NULL in the rooms table).

    Some history:

    • The room_version column (and populating it) was added in #6729 ~3 years ago, but this skips any room that doesn't have an m.room.create event in the current_state_events table.
    • #6874 then handles using the room_version column when reading events.
    • There was then a fix-up for this in #7070 where we also try to pull the room version from state_events (as opposed to current_state_events).
      • @richvdh noticed in review that this will null the room_version for rooms that don't have a create event: https://github.com/matrix-org/synapse/pull/7070/files#r393343338, but notes it should be OK since #6874 handles this case (of out-of-band memberships).
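The trigger condition is easy to check for. Below is a miniature sketch using SQLite; the table shape mirrors the `rooms` table described above, but the connection and data are hypothetical (real deployments run PostgreSQL):

```python
import sqlite3

# Miniature stand-in for Synapse's `rooms` table; the rows are made up
# for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rooms (room_id TEXT PRIMARY KEY, room_version TEXT)")
conn.executemany(
    "INSERT INTO rooms VALUES (?, ?)",
    [("!known:example.org", "9"), ("!unknown:example.org", None)],
)

# Rooms in this state are the ones that make `_get_events_from_db` raise
# "Room ... is unknown" during the stats background update.
affected = [
    row[0]
    for row in conn.execute("SELECT room_id FROM rooms WHERE room_version IS NULL")
]
print(affected)  # → ['!unknown:example.org']
```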
    S-Major T-Defect A-Background-Updates O-Occasional 
    opened by clokep 2
  • Add index to improve performance of the `/timestamp_to_event` endpoint used for jumping to a specific date in the timeline of a room.

    Follows: #9445, #14215

    Base: develop

    Original commit schedule, with full messages:

    1. Add index to events table to help with 'jump to date'
      Before

      synapse=> explain             SELECT event_id FROM events
                  LEFT JOIN rejections USING (event_id)
                  WHERE
                      room_id = '!...:matrix.org'
                      AND origin_server_ts <= 1620172800000
                      /**
                       * Make sure the event isn't an `outlier` because we have no way
                       * to later check whether it's next to a gap. `outliers` do not
                       * have entries in the `event_edges`, `event_forward_extremities`,
                       * and `event_backward_extremities` tables to check against
                       * (used by `is_event_next_to_backward_gap` and `is_event_next_to_forward_gap`).
                       */
                      AND NOT outlier
                      /* Make sure event is not rejected */
                      AND rejections.event_id IS NULL
                  /**
                   * First sort by the message timestamp. If the message timestamps are the
                   * same, we want the message that logically comes "next" (before/after
                   * the given timestamp) based on the DAG and its topological order (`depth`).
                   * Finally, we can tie-break based on when it was received on the server
                   * (`stream_ordering`).
                   */
                  ORDER BY origin_server_ts DESC, depth DESC, stream_ordering DESC
                  LIMIT 1;
                                                          QUERY PLAN
      ------------------------------------------------------------------------------------------------------------------
       Limit  (cost=1075.38..2148.31 rows=1 width=66)
         ->  Incremental Sort  (cost=1075.38..650197.88 rows=605 width=66)
               Sort Key: events.origin_server_ts DESC, events.depth DESC, events.stream_ordering DESC
               Presorted Key: events.origin_server_ts
               ->  Nested Loop Anti Join  (cost=0.71..650170.66 rows=605 width=66)
                     ->  Index Scan Backward using events_ts on events  (cost=0.43..649835.91 rows=605 width=66)
                           Index Cond: (origin_server_ts <= '1620172800000'::bigint)
                           Filter: ((NOT outlier) AND (room_id = '!...:matrix.org'::text))
                     ->  Index Only Scan using rejections_event_id_key on rejections  (cost=0.28..0.55 rows=1 width=41)
                           Index Cond: (event_id = events.event_id)
      

      Index

      synapse=> create index rei_jtd_idx ON events(room_id, origin_server_ts) WHERE not outlier;
      CREATE INDEX
      synapse=> explain             SELECT event_id FROM events
                  LEFT JOIN rejections USING (event_id)
                  WHERE
                      room_id = '!...:matrix.org'
                      AND origin_server_ts <= 1620172800000
                      /**
                       * Make sure the event isn't an `outlier` because we have no way
                       * to later check whether it's next to a gap. `outliers` do not
                       * have entries in the `event_edges`, `event_forward_extremities`,
                       * and `event_backward_extremities` tables to check against
                       * (used by `is_event_next_to_backward_gap` and `is_event_next_to_forward_gap`).
                       */
                      AND NOT outlier
                      /* Make sure event is not rejected */
                      AND rejections.event_id IS NULL
                  /**
                   * First sort by the message timestamp. If the message timestamps are the
                   * same, we want the message that logically comes "next" (before/after
                   * the given timestamp) based on the DAG and its topological order (`depth`).
                   * Finally, we can tie-break based on when it was received on the server
                   * (`stream_ordering`).
                   */
                  ORDER BY origin_server_ts DESC, depth DESC, stream_ordering DESC
                  LIMIT 1;
                                                                     QUERY PLAN
      ----------------------------------------------------------------------------------------------------------------------------------------
       Limit  (cost=5.44..10.08 rows=1 width=66)
         ->  Incremental Sort  (cost=5.44..2819.10 rows=607 width=66)
               Sort Key: events.origin_server_ts DESC, events.depth DESC, events.stream_ordering DESC
               Presorted Key: events.origin_server_ts
               ->  Nested Loop Anti Join  (cost=0.83..2791.78 rows=607 width=66)
                     ->  Index Scan Backward using rei_jtd_idx on events  (cost=0.56..2456.44 rows=607 width=66)
                           Index Cond: ((room_id = '!...:matrix.org'::text) AND (origin_server_ts <= '1620172800000'::bigint))
                     ->  Index Only Scan using rejections_event_id_key on rejections  (cost=0.28..0.55 rows=1 width=41)
                           Index Cond: (event_id = events.event_id)
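
The same partial-index idea can be demonstrated in miniature with SQLite; the schema and rows below are made up for illustration, and only the index shape, `(room_id, origin_server_ts) WHERE NOT outlier`, mirrors the PR:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (event_id TEXT, room_id TEXT, origin_server_ts INTEGER, "
    "depth INTEGER, stream_ordering INTEGER, outlier INTEGER NOT NULL)"
)
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("$early", "!room:example.org", 1620000000000, 1, 1, 0),
        ("$late", "!room:example.org", 1620200000000, 2, 2, 0),
    ],
)
# Composite partial index, same shape as the Postgres one above: seek by
# room, then scan only that room's timestamp range, skipping outliers.
conn.execute(
    "CREATE INDEX events_jump_to_date_idx "
    "ON events(room_id, origin_server_ts) WHERE NOT outlier"
)

query = (
    "SELECT event_id FROM events "
    "WHERE room_id = ? AND origin_server_ts <= ? AND NOT outlier "
    "ORDER BY origin_server_ts DESC, depth DESC, stream_ordering DESC LIMIT 1"
)
args = ("!room:example.org", 1620172800000)

event_id = conn.execute(query, args).fetchone()[0]
plan = conn.execute("EXPLAIN QUERY PLAN " + query, args).fetchall()
print(event_id)  # → $early
# One of the plan rows should mention events_jump_to_date_idx, i.e. the
# planner seeks through the partial index rather than scanning the table.
for row in plan:
    print(row[3])
```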
      
    opened by reivilibre 0
  • Improved confirm_localpart

    Description:

    For OIDC, you can enable the confirm_localpart option so people can confirm their username.

    But sometimes you want users to be able to change their display name but not their email address, and perhaps not their localpart either.

    A drawback of this option is that it is only available for OIDC and not, for instance, for SAML (unsure if it's there for CAS). So this feature request is two things:

    1. Make which specific user attributes have to be confirmed on signup more configurable.
    2. Make this feature available to anyone using SSO, not just OIDC users.

    I have never contributed to Synapse before, but I'm willing to take a shot at making a PR for this.
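
For reference, the existing OIDC-only behaviour is switched on via the user mapping provider config of an OIDC provider; a minimal sketch, where the idp_id, issuer, and client values are placeholders:

```yaml
oidc_providers:
  - idp_id: example
    idp_name: Example SSO
    issuer: "https://id.example.com/"
    client_id: "synapse"
    client_secret: "see-secrets-store"
    scopes: ["openid", "profile"]
    user_mapping_provider:
      config:
        confirm_localpart: true
```

Extending this to SAML/CAS would presumably mean lifting the option out of the OIDC-specific mapping provider into the shared SSO flow.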

    A-SSO S-Tolerable T-Enhancement O-Occasional 
    opened by gabrc52 0
Releases (v1.74.0)
Owner: matrix.org
A new basis for open, interoperable, decentralised real-time communication