* Add UserNote model
* Add UI for user notes
* Put comment in relationships entity
* Add API to create user notes
* Copy user notes to new account when receiving a Move activity
* Address some of the review remarks
* Replace modal with inline editing
* Please CodeClimate
* Button design changes
* Change design again
* Cancel note editing when pressing Escape
* Fixes
* Tweak design again
* Move “Add note” item, and allow users to add notes to themselves
* Rename UserNote to AccountNote, rename “comment” Relationship attribute to “note”
- Change audio files to not be stripped of metadata
- Automatically extract cover art from audio if it exists
- Add `thumbnail` parameter to `POST /api/v1/media`, `POST /api/v2/media` and `PUT /api/v1/media/:id`
- Add `icon` to represent it in attachments in ActivityPub
- Fix `preview_url` containing the URL of a missing image instead of null when there is no thumbnail
- Fix duration of audio not being displayed on public pages until the file is loaded
- Fix audio attachments not being represented in OpenGraph tags
- Fix audio being represented as "1 image" in OpenGraph descriptions
- Fix video metadata being overwritten by paperclip-av-transcoder
- Fix embedded player not using Mastodon's UI
- Fix audio/video progress bars not moving smoothly
- Fix audio/video buffered bars not displaying correctly
* Split media cleanup from reject-media domain blocks to its own service
* Slightly improve ClearDomainMediaService error handling
* Lower DomainClearMediaWorker to lowest-priority queue
* Do not catch ActiveRecord::RecordNotFound in domain block workers
* Fix DomainBlockWorker spec labels
* Add some specs
* Change domain blocks to immediately mark accounts as suspended
Rather than doing so sequentially, account after account, while cleaning
their data. This doesn't change much about how long the block takes to
complete, but it immediately prevents interaction with the blocked domain,
whereas up to now that was only guaranteed once the process ended.
* Fix PostgreSQL load when linking in announcements
Fixes #13245 by caching status lookups
Since statuses are supposed to be known already and we only
need their URLs and a few other things, caching them should
be fine.
Since it's only used by announcements so far, there won't
be many statuses to cache (a minimal cache sketch follows below).
* Perform status lookup when saving announcements, not when rendering them
* Change EntityCache#status to fetch URLs instead of looking into the database
* Move announcement link lookup to publishing worker
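A minimal sketch of the URL-keyed status cache described above (hypothetical illustration backed by `Rails.cache`; not the actual Mastodon `EntityCache`):

```ruby
# Hypothetical sketch: cache status lookups by URL so rendering announcement
# links does not hit PostgreSQL every time. Assumes a Rails cache store and a
# Status model with uri/url columns.
class EntityCache
  MAX_EXPIRATION = 7.days

  def status(url)
    Rails.cache.fetch("entity_cache:status:#{url}", expires_in: MAX_EXPIRATION) do
      Status.find_by(uri: url) || Status.find_by(url: url)
    end
  end
end
```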
* Address issues pointed out during review
Also:
- Fix locks not being removed when jobs go to the dead job queue
- Add UI for managing locks to the Sidekiq dashboard
- Remove unused Sidekiq workers
Fix #13349
Change `all_day` to be a visual client-side cue only
Publish immediately if `scheduled_at` is in the past
Add `published_at` and `updated_at` to announcements JSON
* Add announcements
Fix #11006
* Add reactions to announcements
* Add admin UI for announcements
* Add unit tests
* Fix issues
- Add `with_dismissed` param to announcements API
- Fix end date not being formatted when time range is given
- Fix announcement delete causing reactions to send streaming updates
- Fix announcements container growing too wide and mascot too small
- Fix `all_day` being settable when no time range is given
- Change text "Update" to "Announcement"
* Fix scheduler unpublishing announcements before they are due
* Fix filter params not being passed to announcements filter
* Fix being able to follow oneself by moving to an account that was following the old one
* Add specs
* Add spec to catch MoveWorker issue with local followers following both accounts
* Fix move worker breaking when a local account follows both source and target accounts
* Fix migration from remote to local account not sending Undo Follow
* Fix show_reblogs not being preserved for moved account's followers
* Add soft delete for statuses to allow them to appear instant
* Allow reporting soft-deleted statuses and show them in the admin UI
* Change index for getting an account's statuses
The reason for unattaching media instead of removing it is to support
delete & redraft functionality, but remote or staff-removed statuses
will never be redrafted, so the media should be deleted immediately
* Add database columns for adding notes to domain blocks/restrictions
* Add admin UI to set private and public comments when blocking a domain
* Add text for private and public comments on domain blocks
* Show domain block comments in admin UI
* Add comments to the domain block undo page
* Make UnblockDomainService more robust regarding upgraded domain blocks
* Allow editing domain blocks
* Rename button from “undo domain block” to “view domain block” in account admin UI
* Change test to unsilence silenced users from upgraded blocks
* Add `tootctl preview_cards remove`
* Fix code style
* Remove `Scheduler::PreviewCardsCleanupScheduler` file
* Fix code style again
Add an exclusion for the case where image_file_name is blank
* Add a confirmation prompt when the specified number of days is less than 2 weeks
- Change the maximum retry count for web push notifications (default -> 5).
- When the subscription server is under high load, retries can be repeated many times.
- Because these retries occupy the default queue, the maximum retry count should be reduced.
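As a hedged sketch (the worker name and queue are illustrative, not necessarily the real ones), capping the retry count comes down to a `sidekiq_options` declaration:

```ruby
# Illustrative sketch: cap Sidekiq retries at 5 so a struggling push
# subscription endpoint does not clog the queue with repeated retries.
class WebPushNotificationWorker
  include Sidekiq::Worker

  sidekiq_options queue: 'push', retry: 5

  def perform(subscription_id, notification_id)
    # deliver the web push payload here
  end
end
```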
* Remove Salmon and PubSubHubbub endpoints
* Add error when trying to follow OStatus accounts
* Fix new accounts not being created in ResolveAccountService
* Add request pool to improve delivery performance
Fix #7909
* Ensure connection is closed when exception interrupts execution
* Remove Timeout#timeout from socket connection
* Fix infinite retrial loop on HTTP::ConnectionError
* Close sockets on failure, reduce idle time to 90 seconds
* Add MAX_REQUEST_POOL_SIZE option to limit concurrent connections to the same server
* Use a shared pool size, 512 by default, to stay below open file limit
* Add some tests
* Add more tests
* Reduce MAX_IDLE_TIME from 90 to 30 seconds, reap every 30 seconds
* Use a shared pool that returns preferred connection but re-purposes other ones when needed
* Fix wrong connection being returned on subsequent calls within the same thread
* Reduce mutex calls on flushes from 2 to 1 and add test for reaping
HTTP 401 responses returned by Mastodon's inbox controller may
be temporary if, for instance, the requesting user's actor/key JSON
could not be retrieved in a timely fashion. This change allows retries
instead of dropping the message entirely.
Also added HTTP 408, as that error is by nature temporary. A condensed
sketch of this retry policy follows below.
* Do not retry processing ActivityPub jobs raising validation errors
Jobs yielding validation errors most probably won't ever be accepted,
so it makes sense not to clutter the queues with retries.
* Lower RecordInvalid error reporting to debug log level
* Remove trailing whitespace
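A condensed, hypothetical sketch of the retry policy described above (in reality delivery and processing are separate workers; the class name and `deliver` helper are illustrative stand-ins):

```ruby
# Hypothetical sketch of the retry policy: 401/408 are treated as transient
# and re-raised so Sidekiq retries the job, while validation errors are
# logged at debug level and dropped instead of retried.
class InboxActivityWorker
  include Sidekiq::Worker

  TEMPORARY_FAILURES = [401, 408].freeze

  def perform(json, inbox_url)
    code = deliver(json, inbox_url)
    raise "Temporary failure (HTTP #{code})" if TEMPORARY_FAILURES.include?(code)
  rescue ActiveRecord::RecordInvalid => e
    Rails.logger.debug("Dropping invalid activity: #{e.message}")
  end

  private

  def deliver(_json, _inbox_url)
    # Stub standing in for the signed POST; returns the HTTP status code.
    202
  end
end
```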
* Refactor imports
* Export show_reblogs when exporting list of followed users
* Add support for importing show_reblogs with following collection
* Fix tests
* Fix poll update handler calling a method that was not available
Fix regression from #10209
* Refactor VoteService
* Refactor ActivityPub::DistributePollUpdateWorker and optimize it
* Fix typo
* Fix typo
* Process incoming poll tallies update
* Send Update on poll vote
* Do not send Updates for a poll more often than once every 3 minutes (a throttle sketch follows after this list)
* Include voters in people to notify of results update
* Schedule closing poll worker on poll creation
* Add new notification type for ending polls
* Add front-end support for ended poll notifications
* Fix UpdatePollSerializer
* Fix Updates not being triggered by local votes
* Fix test failures
* Fix web push notifications for closing polls
* Minor cleanup
* Notify voters of both remote and local polls when those close
* Fix delivery of poll updates to mentioned accounts and voters
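A hypothetical sketch of the 3-minute Update throttle mentioned in the list above, using a Redis key with a TTL as a "recently sent" flag (the key name is illustrative, and the poll is assumed to expose its status id):

```ruby
# Hypothetical sketch: only enqueue a poll Update distribution when no Update
# has been sent for this poll within the last 3 minutes.
THROTTLE_TTL = 3 * 60 # seconds

def queue_poll_update!(redis, poll)
  key = "poll_update_throttle:#{poll.id}" # illustrative key name

  # SET with NX + EX returns false when the key already exists, i.e. an
  # Update was already distributed within the TTL window.
  return unless redis.set(key, '1', nx: true, ex: THROTTLE_TTL)

  ActivityPub::DistributePollUpdateWorker.perform_async(poll.status_id)
end
```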
* Fetch up to 5 replies when discovering a new remote status
This is used for resolving threads downwards. The originating
server must add a “replies” attribute with such replies for it to
be useful.
* Add some tests for ActivityPub::FetchRepliesWorker
* Add specs for ActivityPub::FetchRepliesService
* Serialize up to 5 public self-replies for ActivityPub notes
* Add specs for ActivityPub::NoteSerializer
* Move exponential backoff logic to a worker concern
* Fetch first page of paginated collections when fetching thread replies
* Add specs for paginated collections in replies
* Move Note replies serialization to a first CollectionPage
The collection isn't actually pageable yet, as it has no id nor
a `next` field. This may come in another PR.
* Use pluck(:uri) instead of map(&:uri) to improve performance
* Fix fetching replies when they are in a CollectionPage
Reject replies from accounts with no local followers and from relays
that are not enabled, when they do not address local accounts and are
not replies to accounts that do have local followers
- Reduce time-to-digest from 20 to 7 days
- Fetch mentions starting from +1 day since last login
- Fix case when last login is more recent than last e-mail
- Do not render all mentions, only 40, but show number in subject
- Do not send digest to moved accounts
- Do send digest to silenced accounts
* Do not LDS-sign Follow, Accept, Reject, Undo, Block
* Do not use LDS for Create activities of private toots
* Minor cleanup
* Ignore unsigned activities instead of misattributing them
* Use status.distributable? instead of querying visibility directly
- Some associations were missing from the clean-up
- Some attributes were not reset on suspension
- Skip federation and streaming deletes when purging a dead domain
- Move account association definitions to concern
* Eliminate extra accounts select query from FollowService
* Optimistically update follow state in web UI and hide loading bar
Fix #6205
* Asynchronize NotifyService in FollowService
And fix failing test
* Skip Webfinger resolve routine when called from FollowService if possible
If an account is ActivityPub, then webfinger re-resolving is not necessary
when called from FollowService. Improve options of ResolveAccountService
* Add silent column to mentions
* Save silent mentions in ActivityPub Create handler and optimize it
Move networking calls out of the database transaction
* Add "limited" visibility level masked as "private" in the API
Unlike DMs, limited statuses are pushed into home feeds. The access
control rules between direct and limited statuses are almost the same,
except for counter and conversation logic
* Ensure silent column is non-null, add spec
* Ensure filters don't check silent mentions for blocks/mutes
As those are "this person is also allowed to see" rather than "this
person is involved", therefore does not warrant filtering
* Clean up code
* Use Status#active_mentions to limit returned mentions
* Fix code style issues
* Use Status#active_mentions in Notification
And remove stream_entry eager-loading from Notification
* Add conversations API
* Add web UI for conversations
* Add test for conversations API
* Add tests for ConversationAccount
* Improve web UI
* Rename ConversationAccount to AccountConversation
* Remove conversations on block and mute
* Change last_status_id to be a denormalization of status_ids
* Add optimistic locking
* Verify link ownership with rel="me"
* Add explanation about verification to UI
* Perform link verifications
* Add click-to-copy widget for verification HTML
* Redesign edit profile page
* Redesign forms
* Improve responsive design of settings pages
* Restore landing page sign-up form
* Fix typo
* Support <link> tags, add spec
* Fix links not being verified on first discovery and passive updates
* If an Update is signed with known key, skip re-following procedure
Because it means the remote actor did *not* lose their database
* Add CLI method for rotating keys
bin/tootctl accounts rotate [USERNAME]
Generates a new RSA key per account and sends out an Update activity
signed with the old key.
* Key rotation: Space out Update fan-outs every 5 minutes per 1000 accounts
* Skip suspended accounts in key rotation
* Fix uncaching worker
* Revert to using Paperclip's filesystem backend instead of fog-local
fog-local has lots of concurrency issues, causing failures to delete files,
dangling file records, and spurious errors in UncacheMediaWorker
* Send rejections to followers when user hides domain they're on
* Use account domain blocks for "authorized followers" action
Replace soft-blocking (block & unblock) behaviour with follow rejection
* Split sync and async work of account domain blocking
Do not create a domain block when removing followers by domain, as that
is probably unexpected from the user's perspective.
* Adjust confirmation message for domain block
* yarn manage:translations
* Speed up some rake tasks by moving execution to Sidekiq
mastodon:media:remove_silenced
mastodon:media:remove_remote
mastodon:media:redownload_avatars
mastodon:feeds:build
* Fix code style issue
* Do not raise delivery failure on 4xx errors, increase Stoplight threshold
Raise the Stoplight failure threshold from 3 to 10
Status code 429 will still raise a failure/get retried
* Oops
- POST /api/v1/push/subscription
- PUT /api/v1/push/subscription
- DELETE /api/v1/push/subscription
- New OAuth scope: "push" (required for the above methods)
* Revert "Fixes/do not override timestamps (#7331)"
This reverts commit 581a5c9d29.
* Document Snowflake ID corner-case a bit more
Snowflake IDs are used for two purposes: making object identifiers harder to
guess and ensuring they are in chronological order. For this reason, they
are based on the `created_at` attribute of the object.
Unfortunately, inserting items with older snowflake IDs will break the
assumption of consumers of the paging APIs that new items will always have
a greater identifier than the last seen one.
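A toy sketch of the layout (the exact bit split is illustrative, not the real one): the millisecond timestamp sits in the high bits so IDs sort chronologically, and the low bits hold a sequence/random component that makes IDs hard to guess.

```ruby
# Toy snowflake-style ID: high bits = millisecond timestamp, low 16 bits =
# sequence/random data. Ordering follows created_at, which is why backdated
# items break "new items always have greater IDs than the last seen one".
def snowflake_id(created_at, sequence)
  timestamp_ms = (created_at.to_f * 1000).to_i
  (timestamp_ms << 16) | (sequence & 0xFFFF)
end

earlier = snowflake_id(Time.utc(2018, 5, 3, 12, 0, 0), 999)
later   = snowflake_id(Time.utc(2018, 5, 3, 12, 0, 1), 0)
earlier < later # => true, regardless of the sequence part
```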
* Add `override_timestamps` virtual attribute to not correlate snowflake ID with created_at
* Do not override timestamps for incoming toots
* Remove every reference to override_timestamps
Statuses are now created with the announced publishing date
and are only pushed to timelines if that date is at most
6 hours earlier than the time at which they are processed.
* Revert "Weblate translations 20180503 (#7325)"
This reverts commit dfa6bccb64.
* Revert "Prevent timeline from moving when cursor is hovering over it (fixes#7278) (#7327)"
This reverts commit 58852695c8.
* Revert "Add pry-byebug (#7307)"
This reverts commit ab773e4d5f.
* Revert "Do not override timestamps for incoming toots (#7326)"
This reverts commit bd36791832.
Offload creation of local notifications to a worker. Remove two
redundant SQL queries from ProcessMentionsService, remove n+1
XML/JSON serialization via memoization
* No need to re-require sidekiq plugins, they are required via Gemfile
* Add derailed_benchmarks tool, no need to require TTY gems in Gemfile
* Replace ruby-oembed with FetchOEmbedService
Reduce startup by 45382 allocated objects
* Remove preloaded JSON-LD in favour of caching HTTP responses
Reduce boot RAM by about 6 MiB
* Fix tests
* Fix test suite by stubbing out JSON-LD contexts
* Adjust privacy policy to be more specific to Mastodon
Fix #6613
* Change data retention of IP addresses from 5 years to 1 year
* Add even more information
* Remove all (now invalid) translations of the privacy policy
* Add information about archive takeout, remove pointless consent section
* Emphasis on DM privacy
* Improve wording
* Add line about data use for moderation purposes
The to_s method of HTTP::Response keeps blocking while it receives the whole
content, no matter how big it is. This means it may waste time receiving
unacceptably large files. It may also consume memory and disk in the
process. This solves the inefficiency by checking the response length while
receiving.
HTTP connections must be explicitly closed in many cases, and letting
the perform method close connections makes its callers less redundant and
prevents them from forgetting to close connections.
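Both points can be illustrated with a hedged sketch using the http.rb gem (the size limit and helper name are assumptions for illustration): read the body in chunks so oversized responses are aborted early, and close the connection in an `ensure` block so callers cannot forget to do it.

```ruby
require 'http'

MAX_BODY_SIZE = 10 * 1024 * 1024 # assumed 10 MB cap, for illustration only

def fetch_with_limit(url)
  client   = HTTP.timeout(connect: 5, read: 10)
  response = client.get(url)
  body     = +''

  # Consume the body incrementally instead of calling #to_s, so we can stop
  # as soon as the accumulated size exceeds the limit.
  while (chunk = response.body.readpartial)
    body << chunk
    raise 'Response body too large' if body.bytesize > MAX_BODY_SIZE
  end

  body
ensure
  # Always close the connection, even if we bailed out mid-body.
  client.close if client.respond_to?(:close)
end
```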
* Fix #201: Account archive download
* Export actor and private key in the archive
* Optimize BackupService
- Add conversation to cached associations of status, because
somehow it was forgotten and is source of N+1 queries
- Explicitly call GC between batches of records being fetched
(Model class allocations are the worst offender)
- Stream media files into the tar in 1MB chunks, as sketched after this list
(Do not allocate the media file (up to 8MB) as a string in memory)
- Use #bytesize instead of #size to calculate file size for JSON
(Fix FileOverflow error)
- Segment media into subfolders by status ID because apparently
GIF-to-MP4 media are all named "media.mp4" for some reason
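A hedged sketch of that chunked streaming, using Ruby's stdlib TarWriter (the real BackupService may be structured differently; the file names are made up):

```ruby
require 'rubygems/package'

CHUNK_SIZE = 1024 * 1024 # stream in 1 MB chunks instead of slurping the file

# Add one media file to an open tar, reading it 1 MB at a time.
def add_media_to_tar(tar, path, name_in_archive)
  tar.add_file_simple(name_in_archive, 0o444, File.size(path)) do |io|
    File.open(path, 'rb') do |file|
      while (chunk = file.read(CHUNK_SIZE))
        io.write(chunk)
      end
    end
  end
end

File.write('media.mp4', 'fake data') # stand-in file so the example runs
File.open('backup.tar', 'wb') do |out|
  Gem::Package::TarWriter.new(out) do |tar|
    # Partition by status ID so identically named files (e.g. "media.mp4"
    # from GIF-to-MP4 transcoding) do not overwrite each other.
    add_media_to_tar(tar, 'media.mp4', 'media_attachments/files/123/media.mp4')
  end
end
```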
* Keep uniquely generated filename in Paperclip::GifTranscoder
* Ensure dumped files do not overwrite each other by maintaining directory partitions
* Give tar archives a good name
* Add scheduler to remove week-old backups
* Fix code style issue
Currently, Mastodon will retry delivering toots for a bit over 1 hour.
This is a very short timespan when considering private and direct toots, which
cannot be seen by the recipient at all after the delivery attempts have failed.
Ideally, private and direct toots should have a different number of retries,
but I do not know how to do that.
* Fix regeneration marker not being removed after completion
* Return HTTP 206 from /api/v1/timelines/home if regeneration in progress
Prioritize RegenerationWorker by putting it into default queue
* Display loading indicator and poll home timeline while it regenerates
* Add graphic to regeneration message
* Make "not found" indicator consistent with home regeneration
* Avoid sending explicit Undo->Announce when original deleted
* Do not forward a reply back to the server that sent it
* Deduplicate inboxes of rebloggers' followers for delete forwarding
* Adjust test
* Fix wrong class, bad SQL, wrong variable, outdated comment
* Add structure for lists
* Add list timeline streaming API
* Add list APIs, bind list-account relation to follow relation
* Add API for adding/removing accounts from lists
* Add pagination to lists API
* Add pagination to list accounts API
* Adjust scopes for new APIs
- Creating and modifying lists merely requires "write" scope
- Fetching information about lists merely requires "read" scope
* Add test for wrong user context on list timeline
* Clean up tests
Thread resolving is one of the few tasks that isn't retried on failure.
One common cause for failure of this task is a well-connected user replying to
a toot from a little-connected user on a small instance: the small instance
will get many requests at once, and will often fail to answer requests within
the 10 seconds timeout used by Mastodon.
This change makes the ThreadResolveWorker retry a few times, with a
rapidly-increasing delay before retries and a large random contribution in order
to spread the load over time.
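A hedged sketch of what such a bounded retry with rapidly growing, jittered delays can look like in Sidekiq (the exact formula, retry count, and queue are assumptions, not the real configuration):

```ruby
# Hypothetical sketch: retry thread resolving only a few times, with delays
# that grow quickly and include a large random component to spread the load.
class ThreadResolveWorker
  include Sidekiq::Worker

  sidekiq_options queue: 'pull', retry: 3

  # `count` is the number of retries performed so far.
  sidekiq_retry_in do |count|
    15 + 10 * (count**4) + rand(10 * (count**4) + 1)
  end

  def perform(child_status_id, parent_url)
    # fetch the parent status and attach it as in_reply_to here (stub)
  end
end
```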
* Clean up reblog-tracking sets from FeedManager
Builds on #5419, with a few minor optimizations and cleanup of sets
after they are no longer needed.
* Update tests, fix multiply-reblogged case
Previously, we would have lost the fact that a given status was
reblogged if the displayed reblog of it was removed, now we don't.
Also added tests to make sure FeedManager#trim cleans up our reblog
tracking keys, fixed up FeedCleanupScheduler to use the right loop,
and fixed the test for it.
* Keep references to all reblogs of a status on home feed
When inserting reblog: Add to set of reblogs of this status on
the feed, if original status was present in the feed, add it to
that set as well.
When removing a reblog: Remove it from that set. Take random
remaining item from the set. If one exists, re-insert it into feed,
otherwise do not re-insert anything.
Fix #4210
* When original is removed, toss out reblog references
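A hypothetical sketch of those tracking sets with raw Redis commands (key names are illustrative; the feed itself is assumed to be a sorted set scored by status ID):

```ruby
require 'redis'

# Remember that `reblog_id` (a reblog of `original_id`) is in this feed.
def track_reblog(redis, feed_key, reblog_id, original_id)
  redis.sadd("#{feed_key}:reblogs:#{original_id}", reblog_id)
end

# When that reblog is removed, promote another remembered reblog if any;
# otherwise the status simply drops out of the feed.
def untrack_reblog(redis, feed_key, reblog_id, original_id)
  set_key = "#{feed_key}:reblogs:#{original_id}"
  redis.srem(set_key, reblog_id)

  replacement = redis.srandmember(set_key)
  redis.zadd(feed_key, replacement.to_i, replacement) if replacement
end

# Usage: untrack_reblog(Redis.new, 'feed:home:1', 42, 17)
```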
- Rename Mastodon::TimestampIds into Mastodon::Snowflake for clarity
- Skip for statuses coming from inbox, aka delivered in real-time
- Skip for statuses that claim to be from the future
* Improve error handling on LinkCrawlWorker
* Ignore TimeoutError and InvalidURIError too
* Record errors to debug log
* Enable dead job queue on LinkCrawlWorker
Since most acceptable errors were already ignored, only issues on our side should go to the dead job queue.
* Ignore all http gem errors
* Revert "Enable UniqueRetryJobMiddleware even when called from sidekiq worker (#4836)"
This reverts commit 6859d4c028.
* Revert "Do not execute the job with the same arguments as the retry job (#4814)"
This reverts commit be7ffa2d75.
* Whenever a remote keypair changes, unfollow them and re-subscribe to them
In Mastodon (it could be different for other OStatus or AP-enabled software),
a keypair change is indicative of whole user (or instance) data loss. In this
situation, the “new” user might be different, and almost certainly has an empty
followers list. In this case, Mastodon instances will disagree on follower
lists, leading to unreliable delivery and “shadow followers”, that is users
believed by a remote instance to be followers, without the affected user
knowing.
Drawbacks of this change are:
1. If a user legitimately changes their public key for some reason without losing
data (not possible in Mastodon at the moment), they will have their remote
followers unsubscribed/re-subscribed needlessly.
2. Depending on the number of remote followers, this may generate quite some
traffic.
3. If the user change is an attempt at usurpation, the remote followers will
unknowingly follow the usurper. Note that this is *not* a change of
behavior, Mastodon already behaves like that, although delivery might be
unreliable, and the usurper would not have known the former user's
followers.
* Rename ResubscribeWorker to RefollowWorker
* Process followers in batches
When a new user confirms their e-mail, bootstrap their home timeline
by automatically following a set of accounts. By default, all local
admin accounts (that are unlocked). Can be customized by new admin
setting (comma-separated usernames, local and unlocked only)
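A hedged sketch of that bootstrapping step (scopes and the service call are assumptions based on the description above, not the exact implementation):

```ruby
# Hypothetical sketch: once a user confirms their e-mail, follow all unlocked
# local admin accounts to seed the new home timeline.
class BootstrapTimelineService
  def call(new_account)
    bootstrap_targets.each do |target|
      FollowService.new.call(new_account, target.acct)
    end
  end

  private

  def bootstrap_targets
    # Default: every unlocked local admin; an admin setting holding a
    # comma-separated list of usernames could override this.
    Account.local.where(locked: false).joins(:user).where(users: { admin: true })
  end
end
```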
* Decouple Status#local? from uri being nil
* Replace on-the-fly URI generation with stored URIs
- Generate URI in after_save hook for local statuses
- Use static value in TagManager when available, fallback to tag format
- Make TagManager use ActivityPub::TagManager to understand new format
- Adjust tests
* Use another heuristic for locality of old statuses, do not perform a long query
* Exclude tombstone stream entries from Atom feed
* Prevent nil statuses from landing in Pubsubhubbub::DistributionWorker
* Fix URI not being saved (#4818)
* Add more specs for Status
* Save generated uri immediately
and also fix method order to minimize diff.
* Fix alternate HTML URL in Atom
* Fix tests
* Remove not-null constraint from statuses migration to speed it up
Requires moving Atom rendering from DistributionWorker (where
`stream_entry.status` is already nil) to inline (where
`stream_entry.status.destroyed?` is true) and distributing that.
Unfortunately, such XML renderings can no longer be easily chained
together into one payload of n items.
* Add handling of Linked Data Signatures in payloads
* Add a way to sign JSON, fix canonicalization of signature options
* Fix signatureValue encoding, send out signed JSON when distributing
* Add missing security context
* ActivityPub migration procedure
Once one account is detected as going from OStatus to ActivityPub,
invalidate WebFinger cache for other accounts from the same domain
* Unsubscribe from PuSH updates once we receive an ActivityPub payload
* Re-subscribe to PuSH unless already unsubscribed, regardless of protocol
*Note: OStatus URIs are invalid for ActivityPub. But we have them for
as long as we want to keep old OStatus-sourced content and as long as
we remain OStatus-compatible.*
- In Announce handling, if object URI is not a URL, fallback to object URL
- Do not use specialized ThreadResolveWorker, rely on generalized handling
- When serializing notes, if parent's URI is not a URL, use parent's URL
* Add ActivityPub inbox
* Handle ActivityPub deletes
* Handle ActivityPub creates
* Handle ActivityPub announces
* Stubs for handling all activities that need to be handled
* Add ActivityPub actor resolving
* Handle conversation URI passing in ActivityPub
* Handle content language in ActivityPub
* Send accept header when fetching actor, handle JSON parse errors
* Test for ActivityPub::FetchRemoteAccountService
* Handle public key and icon/image when embedded/as array/as resolvable URI
* Implement ActivityPub::FetchRemoteStatusService
* Add stubs for more interactions
* Undo activities implemented
* Handle out of order activities
* Hook up ActivityPub to ResolveRemoteAccountService, handle
Update Account activities
* Add fragment IDs to all transient activity serializers
* Add tests and fixes
* Add stubs for missing tests
* Add more tests
* Add more tests
* Do not raise unretryable exceptions in ResolveRemoteAccountService
* Remove fatal exceptions from ResolveRemoteAccountService
Exceptions that cannot be retried should not be raised. Add a new exception
class for those that can be retried (Mastodon::UnexpectedResponseError)
* Wrap methods of ProcessFeedService::ProcessEntry in classes
This is the same change as 425acecfdb, except
that it includes the following differences:
* Revert irrelevant change in find_or_create_conversation
* Fix error handling for RemoteActivity
* Introduce Ostatus name space
* Add dependency on idn-ruby to speed up URI normalization
* Use normalized_host instead of normalize.host when applicable
When we are only interested in the normalized host, calling normalized_host
avoids normalizing the other components of the URI as well as creating a
new object
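For illustration (assuming the addressable gem, which provides both methods):

```ruby
require 'addressable/uri'

uri = Addressable::URI.parse('https://ExAmPle.com/SomePath?q=1')

# Builds an entirely new, fully normalized URI object, then reads its host:
uri.normalize.host   # => "example.com"

# Only normalizes the host component, without normalizing the rest:
uri.normalized_host  # => "example.com"
```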
* Add Request class with HTTP signature generator (a signing sketch follows after this list)
Spec: https://tools.ietf.org/html/draft-cavage-http-signatures-06
* Add HTTP signature verification concern
* Add test for SignatureVerification concern
* Add basic test for Request class
* Make PuSH subscribe/unsubscribe requests use new Request class
Accidentally fix lease_seconds not being set and sent properly, and
change the new minimum subscription duration to 1 day
* Make all PuSH workers use new Request class
* Make Salmon sender use new Request class
* Make FetchLinkService use new Request class
* Make FetchAtomService use the new Request class
* Make Remotable use the new Request class
* Make ResolveRemoteAccountService use the new Request class
* Add more tests
* Allow a ±30-second window for a signed request to remain valid
* Disable time window validation for signed requests, restore 7 days
as the PuSH subscription duration (which was the previous default due to a bug)
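As promised above, a rough sketch of what the signature generator does per the draft-cavage spec (helper names are hypothetical and the header list is reduced to the minimum): build a signing string from the request target, host, and date, sign it with the account's RSA key, and pack the result into a `Signature` header value.

```ruby
require 'openssl'
require 'base64'
require 'time'

# Hypothetical sketch of a draft-cavage HTTP Signature header value.
def signature_header(key_id, private_key, method, path, host, date)
  signed_string = [
    "(request-target): #{method.downcase} #{path}",
    "host: #{host}",
    "date: #{date}"
  ].join("\n")

  signature = Base64.strict_encode64(
    private_key.sign(OpenSSL::Digest.new('SHA256'), signed_string)
  )

  %(keyId="#{key_id}",algorithm="rsa-sha256",headers="(request-target) host date",signature="#{signature}")
end

# Usage with a throwaway key, for illustration only:
key = OpenSSL::PKey::RSA.new(2048)
puts signature_header('https://example.com/users/alice#main-key', key,
                      'POST', '/inbox', 'example.com', Time.now.utc.httpdate)
```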
There is little point in retrying so often when a service is down
or does not exist anymore. Subscriptions are renewed 1 day before they
should expire, so retrying in 30 minutes, then 2 hours, then 12 hours
is fine. If even after that the remote server does not work, there is
little sense in retrying more often than once a day.
Also, uniqueness of the job should ensure that failed retries will
not result in multiple retries for the same endpoint when the next
resubscription cycle comes
* Add overview of active sessions
* Better display of browser/platform name
* Improve how browser information is stored and displayed for sessions overview
* Fix test
* Fix #2347 - Bind web UI access token to session
When you log out, the session also destroys the access token, so it's no longer
valid. If the access token is destroyed some other way, the session is also
destroyed, requiring a re-login.
Fix #1681 - Add scheduler to remove revoked access tokens and grants
* Fix test
* Introduce domains method to Account relation
Account had a followers_domains method, which was excessively specific.
Let the Account relation have a domains method instead.
* Move follow_mapping in Account to AccountInteractions
* Introduce shared examples for AccountAvatar inclusion
* Cover Account more
* Make Pubsubhubbub::DistributionWorker handle both single stream entry
arguments and arrays of stream entries
* Add BatchedRemoveStatusService, make SuspendAccountService use it
* Improve method names
* Add test
* Add more tests
* Use PuSH payloads of 100 to have a clear mapping of
1000 input statuses -> 10 PuSH payloads
It was nice while it lasted
* Add form for account deletion
* If avatar or header are gone from source, remove them
* Add option to have SuspendAccountService remove user record, add tests
* Exclude suspended accounts from search
* Do not cancel PuSH subscriptions after encountering a "permanent" error response
After talking with MMN about it, it turns out some servers/PHP setups do
return 4xx errors while rebooting, so this anti-feature that was meant
to take load off of the hub is doing more harm than good in terms of
breaking subscriptions
* Update delivery_worker.rb
* Fix #2473 - Use sidekiq scheduler to refresh PuSH subscriptions instead of cron
Fix an issue where a / in a domain would raise an exception in TagManager#normalize_domain
PuSH subscription refreshes are done in a round-robin way to avoid hammering a single
server's hub in sequence. Correct handling of failures/retries through Sidekiq (see
also #2613). Optimize Account#with_followers scope. Also, since subscriptions
are now delegated to Sidekiq jobs, an uncaught exception will not stop the entire
refreshing operation halfway through
Fix #2702 - Correct user agent header on outgoing HTTP requests
* Add test for SubscribeService
* Extract #expiring_accounts into method
* Make mastodon:push:refresh no-op
* Queues are now defined in sidekiq.yml
* Queues are now in sidekiq.yml
* Fix #2119 - Whenever about to send an HTTP request, normalize the URI
* Add test for IDN request in FetchLinkCardService
* Perform IDN normalization on domains before they are stored in the DB
* Make private toots get PuSHed to subscription URLs that belong to domains where you have approved followers
* Authorized followers controller, stub for bulk action
* Soft block in the background
* Add simple test for new controller
* Rename Settings::FollowersController to Settings::FollowerDomainsController, paginate results,
rename "private" post setting to "followers-only", fix pagination style, improve post privacy
preferences style, improve warning style
* Extract compose form warnings into own container, show warning when posting to followers-only with unlocked account
* Fix #1141, fix #1126 - Work through UpdateRemoteProfileService for both <feed> and <entry> top-level tags
* Improve code quality, remove line unrelated to fix
* Rewrite Atom generation from stream entries to use Ox instead of Nokogiri::Builder
StreamEntry is now limited to only statuses, which allows some optimization. Removed
extra queries on AccountsController#show. AtomSerializer instead of AtomBuilderHelper
used in AccountsController#show, StreamEntriesController#show, StreamEntryRenderer
and PubSubHubbub::DistributionWorker
PubSubHubbub::DistributionWorker moves n+1 DomainBlock query to PubSubHubbub::DeliveryWorker
instead.
All Salmon slaps that aren't based on StreamEntry still use AtomBuilderHelper and Nokogiri
* All Salmon slaps now use Ox instead of Nokogiri. Statuses no longer touch their account