An uncommon but somewhat difficult-to-diagnose issue is dealing with
improperly-seeded databases. In such cases, instance-signed fetches will
fail with an ActiveRecord::RecordNotFound error, usually caught and handled
as a generic 404, leading people to think the remote resource itself has not
been found, while it is actually the local instance actor that does not exist.
This commit changes the code so that a failure to find the instance actor
automatically creates a new one, so that improperly-seeded databases do
not cause any issues.
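A minimal sketch of the fallback, assuming the instance actor lives at the conventional id -99 (the attribute values are illustrative, not the exact seed):

```ruby
# Sketch: look up the instance actor, creating it on the fly if the database
# was seeded without one.
def instance_actor
  Account.find(-99)
rescue ActiveRecord::RecordNotFound
  # Illustrative attributes; the real seed/creation logic may differ.
  Account.create!(id: -99, actor_type: 'Application', locked: true, username: 'instance.actor')
end
```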
Database serialization failures occur when a read replica is used
and a query takes long enough that the rows it reads become unavailable
on the primary database. Since this condition is temporary, the response
should be HTTP 503.
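A sketch of the mapping, assuming it is done with a `rescue_from` in the shared controller error handling (the exact rendering helper may differ):

```ruby
class ApplicationController < ActionController::Base
  # Serialization failures from the read replica are transient, so ask the
  # client to retry instead of reporting a server bug.
  rescue_from ActiveRecord::SerializationFailure do
    render json: { error: 'Temporary database problem, please try again' }, status: 503
  end
end
```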
Re-order rescue definitions according to their status codes
* Fix crash on receiving requests with a missing Digest header
Return an error pointing out that the Digest header is missing, instead of crashing.
Fixes #15743
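A sketch of the guard, assuming a `verify_body_digest!` step in the signature-verification code (names are illustrative):

```ruby
# Sketch: fail with a clear verification error instead of crashing when the
# header is absent.
def verify_body_digest!
  raise SignatureVerificationError, 'Digest header missing' unless request.headers.key?('Digest')

  # ... proceed with comparing the Digest header against the request body ...
end
```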
* Fix from review feedback
This also skips fetching the actor completely.
This will be useful if we end up distributing Update activities linked to
account suspensions more widely (they are currently only delivered to
the suspended account's followers), as currently, instances that do not know
about the suspended account would fetch it just to process the suspension.
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
* Add honeypot fields to limit non-specialized spam
Add two honeypot fields: a fake website input and a fake password confirmation
one. The label/placeholder/aria-label tells users not to fill them in, and they
are hidden with CSS, so legitimate users should not fall into the trap.
This should cut down on some non-Mastodon-specific spambots.
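A sketch of the honeypot attributes, assuming they are handled as virtual attributes on the registration model (model placement and field names are illustrative):

```ruby
class User < ApplicationRecord
  # Sketch: honeypot fields must stay empty; any value strongly suggests a bot.
  attr_accessor :website, :confirm_password

  validates :website, absence: true, on: :create
  validates :confirm_password, absence: true, on: :create
end
```

Keeping this as a regular validation (rather than silently rejecting the request) is what makes it possible to show an error and let genuine users clear the honeypot fields, as mentioned below.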
* Require a 3-second delay before submitting the registration form
* Fix tests
* Move registration form time check to model validation
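A sketch of the model-level time check, assuming the time the form was rendered is carried in a hidden field and exposed as a hypothetical `registration_form_time` attribute:

```ruby
class User < ApplicationRecord
  # Sketch: reject registrations submitted less than 3 seconds after the form
  # was rendered. The attribute name is hypothetical and assumed to already be
  # a Time restored from the hidden field.
  attr_accessor :registration_form_time

  validate :registration_form_time_check, on: :create

  private

  def registration_form_time_check
    return if registration_form_time.blank?

    errors.add(:base, 'Form submitted too fast, please try again') if registration_form_time > 3.seconds.ago
  end
end
```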
* Give people a chance to clear the honeypot fields
* Refactor honeypot translation strings
Co-authored-by: Claire <claire.github-309c@sitedethib.com>
If someone tries logging in to an account and is prompted for a 2FA
code or sign-in token, even if the account's password or e-mail is
updated in the meantime, the session will show the prompt and allow
the login process to complete with a valid 2FA code or sign-in token
* Add more specific error message when request body digest is invalid
This may help other implementors debug their implementation.
* Relax Host parameter requirement to GET requests
The only POST requests processed by Mastodon need the objects/actors (including
their host) to be explicitly mentioned in the request's body, so replaying
a legitimate request against another host should not be a security issue.
* Support Digest headers using multiple algorithms or lowercase algorithm names
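A sketch of tolerant parsing, assuming a helper that extracts the SHA-256 value from a header such as `Digest: sha-512=..., SHA-256=...` (helper name is hypothetical):

```ruby
# Sketch: the Digest header may list several algorithm=value pairs, possibly
# with lowercase algorithm names; pick the SHA-256 entry if present.
def sha256_digest_value(digest_header)
  pairs = digest_header.split(',').map { |pair| pair.strip.split('=', 2) } # limit of 2 keeps base64 '=' padding
  entry = pairs.find { |(algorithm, _value)| algorithm.to_s.casecmp('sha-256').zero? }
  entry&.last
end
```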
* Add support for followers synchronization on the receiving end
Check the `collectionSynchronization` attribute on `Create` and `Announce`
activities and synchronize followers from provided collection if possible.
* Add tests for followers synchronization on the receiving end
* Add support for follower synchronization on the sender's end
* Add tests for the sending end
* Switch from AS attributes to HTTP header
Replace the custom `collectionSynchronization` ActivityStreams attribute by
an HTTP header (`X-AS-Collection-Synchronization`) with the same syntax as
the `Signature` header and the following fields:
- `collectionId` to specify which collection to synchronize
- `digest` for the SHA256 hex-digest of the list of followers known on the
receiving instance (where “receiving instance” is determined by accounts
sharing the same host name for their ActivityPub actor `id`)
- `url` of a collection that should be fetched by the instance actor
Internally, move away from the webfinger-based `domain` attribute and use
account `uri` prefix to group accounts.
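For illustration, a request could then carry a header along these lines (URLs are made up, and the header is wrapped here for readability):

```
X-AS-Collection-Synchronization:
  collectionId="https://example.com/users/alice/followers",
  digest="<SHA256 hex-digest as described above>",
  url="https://example.com/users/alice/followers_synchronization"
```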
* Add environment variable to disable followers synchronization
Since the whole mechanism relies on some new preconditions that, in some
extremely rare cases, might not be met, add an environment variable
(DISABLE_FOLLOWERS_SYNCHRONIZATION) to disable the mechanism altogether and
avoid followers being incorrectly removed.
The current conditions are:
1. all managed accounts' actor `id` and inbox URL have the same URI scheme and
netloc.
2. all accounts whose actor `id` or inbox URL share the same URI scheme and
netloc as a managed account must be managed by the same Mastodon instance
as well.
As far as Mastodon is concerned, breaking those preconditions requires extensive
configuration changes in the reverse proxy and might also cause other issues.
Therefore, this environment variable provides a way out for people with highly
unusual configurations, and can be safely ignored for the overwhelming majority
of Mastodon administrators.
* Only set follower synchronization header on non-public statuses
This is to avoid unnecessary computations and allow Follow-related
activities to be handled by the usual codepath instead of going through
the synchronization mechanism (otherwise, any Follow/Undo/Accept activity
would trigger the synchronization mechanism even if processing the activity
itself would be enough to re-introduce synchronization)
* Change how ActivityPub::SynchronizeFollowersService handles follow requests
If the remote lists a local follower that we only know to have sent a follow
request, consider the follow request accepted instead of sending an Undo.
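A sketch of that branch in the service, assuming a `FollowRequest#authorize!` helper along the lines of Mastodon's existing model methods (simplified, error handling omitted):

```ruby
# Sketch: the remote already counts this local account among the followers,
# so a locally-pending follow request can be treated as accepted.
def handle_pending_follow_request(local_account, remote_account)
  follow_request = FollowRequest.find_by(account: local_account, target_account: remote_account)
  follow_request&.authorize!
end
```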
* Integrate review feedback
- rename X-AS-Collection-Synchronization to Collection-Synchronization
- various minor refactoring and code style changes
* Only select required fields when computing followers_hash
* Use actor URI rather than webfinger domain in synchronization endpoint
* Change hash computation to be a XOR of individual hashes
This makes it much easier to be memory-efficient, and avoids sorting discrepancy issues.
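A standalone sketch of the idea (outside ActiveRecord; the real code streams rows from the database rather than taking an Array):

```ruby
require 'digest'

# Sketch: combine per-follower digests with XOR; the result does not depend on
# the order in which URIs are processed, so no sorting or large buffer is needed.
def followers_hash(follower_uris)
  combined = "\x00" * 32

  follower_uris.each do |uri|
    uri_digest = Digest::SHA256.digest(uri)
    combined = combined.bytes.zip(uri_digest.bytes).map { |a, b| a ^ b }.pack('C*')
  end

  combined.unpack1('H*')
end
```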
* Marginally improve followers_hash computation speed
* Further improve hash computation performance by using pluck_each
* Add bell button
Fix#4890
* Remove duplicate type from post-deployment migration
* Fix legacy class type mappings
* Improve query performance with better index
* Fix validation
* Remove redundant index from notifications
* Do not serve account actors at all in limited federation mode
When an account is fetched without a signature from an allowed instance,
return an error.
This isn't really an improvement in security, as the only information that was
previously returned was required protocol-level info, and the only personal bit
was the existence of the account. The existence of the account can still be
checked by issuing a webfinger query, as those are accepted without signatures.
However, this change makes it so that unallowed instances won't create account
records on their end when they find a reference to an unknown account.
The previous behavior of rendering a limited list of fields, instead of not
rendering the actor at all, was in order to prevent situations in which two
instances in Authorized Fetch mode or Limited Federation mode would fail to
reach each other because resolving an account would require a signed query…
from an account which can only be fetched with a signed query itself. However,
this should now be fine as fetching accounts is done by signing on behalf of
the special instance actor, which does not require any kind of valid signature
to be fetched.
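A sketch of the resulting guard (controller and helper names are illustrative; the real change likely adjusts the existing signature-verification before_actions):

```ruby
# Sketch: in limited federation mode, refuse to render the actor unless the
# request is signed by an account from an allowed instance.
class AccountsController < ApplicationController
  before_action :require_signature!, if: :limited_federation_mode? # illustrative helper names

  def show
    # render the ActivityPub actor document as before
  end
end
```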
* Fix tests
* Add database support for list show-reply preferences
* Add backend support to read and update list-specific show_replies settings
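A sketch of the backing model change (the policy names and stored integers are illustrative of the three options rather than the exact values used):

```ruby
class List < ApplicationRecord
  # Sketch: which replies to show in this list's timeline.
  enum replies_policy: { followed: 0, list: 1, none: 2 }
end
```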
* Add basic UI to set list replies setting
* Add specs for list replies policy
* Switch "cycling" reply policy link to a set of radio inputs
* Capitalize replies_policy strings
* Change radio button design to be consistent with that of the directory explorer
* Make Array-creation behavior of Paginable more predictable
Paginable.paginate_by_id usually returns an ActiveRecord::Relation, but it
returns an Array if the min_id option is present. This behavior caused problems
that were fixed with the following commits:
- 552e886b64
- b63ede5005
- 64ef37b89d
To prevent similar problems from recurring, this commit introduces two
changes:
- The scope now always returns an Array, whether the min_id option is present
or not.
- The scope is renamed to to_a_paginated_by_id to clarify that it returns an
Array.
* Transform Paginable.to_a_paginated_by_id from a scope to a class method
https://api.rubyonrails.org/classes/ActiveRecord/Scoping/Named/ClassMethods.html#method-i-scope
> The method is intended to return an ActiveRecord::Relation object, which
> is composable with other scopes.
Paginable.to_a_paginated_by_id returns an Array and is not appropriate
as a scope.
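A sketch of the resulting class method, assuming the existing `paginate_by_min_id`/`paginate_by_max_id` scopes (signatures simplified):

```ruby
module Paginable
  extend ActiveSupport::Concern

  class_methods do
    # Sketch: always materialize an Array so callers cannot keep composing the
    # result as a relation by mistake.
    def to_a_paginated_by_id(limit, options = {})
      if options[:min_id].present?
        # The ascending min_id query is reversed to keep newest-first ordering,
        # which is why this branch never returned a relation in the first place.
        paginate_by_min_id(limit, options[:min_id]).reverse
      else
        paginate_by_max_id(limit, options[:max_id], options[:since_id]).to_a
      end
    end
  end
end
```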
* Replace incorrect use of distinct with group
Some uses of ActiveRecord::QueryMethods#distinct pass field names, but that
is incorrect for the current version of Rails.
ActiveRecord::QueryMethods#group provides the expected behavior and also
benefits performance. See commit 6da24aad4cafdef8d8a2c92bac2002a5fc2fe9c8.
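An illustrative example of the change (the query itself is made up):

```ruby
# Passing field names to #distinct is incorrect in current Rails: the argument
# only toggles the flag, so this degrades to DISTINCT over all selected columns.
Status.joins(:favourites).distinct(:id)

# Grouping by the primary key expresses the intended uniqueness explicitly.
Status.joins(:favourites).group('statuses.id')
```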
* Introduce ApplicationController#cache_collection_paginated_by_id
ApplicationController#cache_collection_paginated_by_id fuses
ApplicationController#cache_collection and Paginable.paginate_by_id.
An advantage of this method is that it prevents modification of the scope
that Paginable.paginate_by_id may provide.
ApplicationController#cache_collection always returns an array, so there
is no possibility of the scope being modified. It is also clearer for a
programmer, considering the implication of "cache".
This method can also emit more efficient queries by using
Cacheable.cache_ids before calling Paginable.paginate_by_id.
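A sketch of the helper, following the description above and the renamed to_a_paginated_by_id (argument handling simplified):

```ruby
# Sketch: cache_ids narrows the query to the cache-relevant columns, pagination
# materializes an Array, and cache_collection hydrates the full records.
def cache_collection_paginated_by_id(scope, klass, limit, options)
  cache_collection scope.cache_ids.to_a_paginated_by_id(limit, options), klass
end
```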
The old implementation had two queries:
1. The query constructed in Api::V1::FavouritesController#results
2. The query constructed in #cached_favourites, which is merged with 1.
Both of them are issued against PostgreSQL. The combination of the two
queries caused the following problems:
- The small window between the two queries allows for race conditions.
- Minor performance inefficiency.
Moreover, the construction of query 2, which involves merging with query
1, has a bug. Query 1 is finalized with paginate_by_id, but paginate_by_id
returns an array when the min_id parameter is specified. That behavior prevents
the query from being merged, and in the real world, ActiveRecord simply ignores
the merge (!), which results in a full scan of the statuses and
favourites tables.
This change fixes these issues by simply letting query 1 get all the work
done.
The DISTINCT clause removes duplicated records according to all the selected
attributes. In reality, duplicates could be removed by looking only at
statuses.id, but the clause confuses the query planner and yields
insufficient performance.
The behavior is also problematic if the scope produced by HashQueryService
is used to query columns other than id (using the pluck method, for example).
The scope is expected to contain unique statuses, but uniqueness would be
evaluated over arbitrary columns other than id.
The GROUP BY clause resolves those problems by explicitly specifying the
column to take into account when distinguishing records.
A workaround for the problem with the DISTINCT clause in
Api::V1::Timelines::TagController is no longer necessary and has been removed.
* Add support for latest HTTP Signatures spec draft
https://www.ietf.org/id/draft-ietf-httpbis-message-signatures-00.html
- add support for the “hs2019” signature algorithm (assumed to be equivalent
to RSA-SHA256, since we do not have a mechanism to specify the algorithm
within the key metadata yet)
- add support for (created) and (expires) pseudo-headers and related
signature parameters, when using the hs2019 signature algorithm
- adjust default “headers” parameter while being backwards-compatible with
previous implementation
- change the acceptable time window logic from 12 hours surrounding the “date”
header to accepting signatures created up to 1 hour in the future and
expiring up to 1 hour in the past (but only allowing expiration dates up to
12 hours after the creation date)
This doesn't conform with the current draft, as the draft doesn't permit
accounting for clock skew.
This, however, should be addressed in a future version of the draft:
https://github.com/httpwg/http-extensions/pull/1235
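For illustration, a signature using the new parameters could look like this (key id, timestamps and signature value are made up):

```
Signature: keyId="https://example.com/users/alice#main-key",algorithm="hs2019",created=1600000000,expires=1600003600,headers="(request-target) (created) (expires) host digest",signature="<base64>"
```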
* Add additional signature requirements
* Rewrite signature params parsing using Parslet
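A minimal sketch of what such a Parslet grammar looks like (heavily simplified compared to the real one):

```ruby
require 'parslet'

# Sketch: parse `key="value"` pairs from a Signature header into key/value
# captures. The real grammar is stricter about tokens and quoted strings.
class SignatureParamsParser < Parslet::Parser
  rule(:token)         { match("[0-9a-zA-Z!#$%&'*+.^_`|~-]").repeat(1) }
  rule(:quoted_string) { str('"') >> match('[^"]').repeat >> str('"') }
  rule(:param)         { token.as(:key) >> str('=') >> (quoted_string | token).as(:value) }
  rule(:params)        { param >> (str(',') >> str(' ').maybe >> param).repeat }
  root(:params)
end

# SignatureParamsParser.new.parse('keyId="foo",algorithm="hs2019",signature="bar"')
```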
* Make apparent which signature algorithm Mastodon uses on verification failure
Mastodon uses RSASSA-PKCS1-v1_5, which is not recommended for new applications,
and new implementers may thus unknowingly use RSASSA-PSS.
* Add workaround for PeerTube's invalid signature header
The previous parser allowed incorrect Signature headers, such as
those produced by old versions of the `http-signature` node.js package,
and seemingly used by PeerTube.
This commit adds a workaround for that.
* Fix `signature_key_id` raising an exception
Previously, parsing failures would result in `signature_key_id` being nil,
but the parser changes made that result in an exception.
This commit changes the `signature_key_id` method to return `nil` in case
of parsing failures.
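A sketch matching the described behavior (surrounding method names are illustrative):

```ruby
# Sketch: swallow parse failures so an unparseable Signature header surfaces as
# a normal verification failure instead of an unhandled exception.
def signature_key_id
  signature_params['keyId']
rescue Parslet::ParseFailed
  nil
end
```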
* Move extra HTTP signature helper methods to private methods
* Relax (request-target) requirement to (request-target) || digest
This lets requests from Plume work without lowering security significantly.