* [chore] set max open / idle conns + conn max lifetime for both postgres and sqlite
* reduce cache size default to 8MiB, reduce connections to 2 * cpu
* introduce max open conns multiplier, tune sqlite and pg separately
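A minimal sketch of the kind of pool tuning these commits describe, using Go's database/sql (the multiplier and lifetime values here are illustrative, not the exact settings GTS ships):

```go
package db

import (
	"database/sql"
	"runtime"
	"time"
)

// tunePool applies limits like those described above: max open/idle
// connections derived from a CPU-count multiplier, plus a max lifetime
// so stale connections get recycled. Values are illustrative only.
func tunePool(db *sql.DB, multiplier int) {
	maxConns := multiplier * runtime.GOMAXPROCS(0) // e.g. 2 * CPU count
	db.SetMaxOpenConns(maxConns)
	db.SetMaxIdleConns(maxConns)
	db.SetConnMaxLifetime(5 * time.Minute)
}
```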
* go fmt
* start adding report client api
* route + test reports get
* start report create endpoint
* you can create reports now babyy
* stub account report processor
* add single reportGet endpoint
* fix test
* add more filtering params to /api/v1/reports GET
* update swagger
* use marshalIndent in tests
* add + test missing Link info
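As a rough usage sketch of the new client API (the `resolved` and `limit` filter parameters are assumptions based on the bullets above, not a confirmed list):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	token := "YOUR_ACCESS_TOKEN" // hypothetical bearer token
	// Query the reports endpoint; the filter params shown are assumed.
	req, err := http.NewRequest(http.MethodGet,
		"https://example.org/api/v1/reports?resolved=false&limit=20", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```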
When GTS is running in a container runtime which has configured CPU or
memory limits, or under an init system that uses cgroups to impose CPU
and memory limits, the values the Go runtime sees for GOMAXPROCS and
GOMEMLIMIT are still based on the host resources, not the cgroup.
At least for the throttling middlewares, which use GOMAXPROCS to
configure their queue size, this can result in GTS running with values
too big compared to the resources that will actually be available to it.
This introduces 2 dependencies which can pick up resource constraints
from the current cgroup and tune the Go runtime accordingly. This should
result in the different queues being appropriately sized and in general
more predictable performance. These dependencies are a no-op on
non-Linux systems or if running in a cgroup that doesn't set a limit on
CPU or memory.
The automatic tuning of GOMEMLIMIT can be disabled by either explicitly
setting GOMEMLIMIT yourself or by setting AUTOMEMLIMIT=off. The
automatic tuning of GOMAXPROCS can similarly be counteracted by setting
GOMAXPROCS yourself.
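A minimal sketch of how this kind of cgroup-aware tuning is typically wired in, assuming the commonly used go.uber.org/automaxprocs and github.com/KimMachineGun/automemlimit packages (the commit text doesn't name the dependencies here):

```go
package main

// Blank imports are enough: each package's init() inspects the current
// cgroup and adjusts the runtime before main() runs. Both are no-ops
// outside Linux or when the cgroup sets no limit.
import (
	_ "github.com/KimMachineGun/automemlimit" // sets GOMEMLIMIT from the cgroup memory limit
	_ "go.uber.org/automaxprocs"              // sets GOMAXPROCS from the cgroup CPU quota
)

func main() {
	// normal startup; queue sizes derived from GOMAXPROCS now reflect
	// the cgroup limits rather than the host's CPU count.
}
```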
* don't serve unused fields for video attachments
* parse video bitrate + duration more accurately
* use ServeContent where appropriate to respect Range
* abstract temp file seeker into its own function
* media processor consolidation and reformatting, reduce the number of required syscalls
Signed-off-by: kim <grufwub@gmail.com>
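Regarding the ServeContent and temp-file-seeker bullets above, a sketch of the pattern (helper names are illustrative, not GTS's actual code):

```go
package media

import (
	"io"
	"net/http"
	"os"
	"time"
)

// tempFileSeeker spools a non-seekable stream into a temp file so it can
// be served with http.ServeContent, which needs an io.ReadSeeker to
// honour Range requests. The caller must close (and remove) the file.
func tempFileSeeker(r io.Reader) (io.ReadSeekCloser, error) {
	tmp, err := os.CreateTemp("", "gts-media-*")
	if err != nil {
		return nil, err
	}
	if _, err := io.Copy(tmp, r); err != nil {
		tmp.Close()
		return nil, err
	}
	if _, err := tmp.Seek(0, io.SeekStart); err != nil {
		tmp.Close()
		return nil, err
	}
	return tmp, nil
}

// serveMedia lets ServeContent handle Range (e.g. video seeking),
// If-Modified-Since, and Content-Type for us.
func serveMedia(w http.ResponseWriter, r *http.Request, name string, mod time.Time, rs io.ReadSeeker) {
	http.ServeContent(w, r, name, mod, rs)
}
```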
* update go-store library, stream jpeg/png encoding + use buffer pools, improved media processing AlreadyExists error handling
Signed-off-by: kim <grufwub@gmail.com>
* fix duration not being set, fix mp4 test expecting error
Signed-off-by: kim <grufwub@gmail.com>
* fix test expecting media files with different extension
Signed-off-by: kim <grufwub@gmail.com>
* remove unused code
Signed-off-by: kim <grufwub@gmail.com>
* fix expected storage paths in tests, update expected test thumbnails
Signed-off-by: kim <grufwub@gmail.com>
* remove dead code
Signed-off-by: kim <grufwub@gmail.com>
* fix cached presigned s3 url fetching
Signed-off-by: kim <grufwub@gmail.com>
* fix tests
Signed-off-by: kim <grufwub@gmail.com>
* fix test models
Signed-off-by: kim <grufwub@gmail.com>
* update media processing to use sync.Once{} for concurrency protection
Signed-off-by: kim <grufwub@gmail.com>
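The sync.Once pattern referenced above, sketched (type and method names are illustrative):

```go
package media

import (
	"context"
	"sync"
)

// ProcessingMedia guards expensive processing with sync.Once: concurrent
// Load calls trigger the work exactly once, and every caller gets the
// same stored result.
type ProcessingMedia struct {
	once sync.Once
	err  error
}

func (p *ProcessingMedia) Load(ctx context.Context) error {
	p.once.Do(func() {
		p.err = p.process(ctx) // runs at most once, even under contention
	})
	return p.err
}

func (p *ProcessingMedia) process(ctx context.Context) error {
	// decode, thumbnail, and store steps would live here
	return nil
}
```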
* shutup linter
Signed-off-by: kim <grufwub@gmail.com>
* fix passing in KVStore GetStream() as stream to PutStream()
Signed-off-by: kim <grufwub@gmail.com>
* fix unlocks of storage keys
Signed-off-by: kim <grufwub@gmail.com>
* whoops, return the error...
Signed-off-by: kim <grufwub@gmail.com>
* pour one out for tobi's code <3
Signed-off-by: kim <grufwub@gmail.com>
* add back the byte slurping code
Signed-off-by: kim <grufwub@gmail.com>
* check for both ErrUnexpectedEOF and EOF
Signed-off-by: kim <grufwub@gmail.com>
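A sketch of what that check looks like when slurping bytes with io.ReadFull (function name is illustrative):

```go
package media

import (
	"errors"
	"io"
)

// slurp reads up to size bytes. io.ReadFull reports a short read as
// io.ErrUnexpectedEOF and an empty read as io.EOF; both just mean the
// source ended early, so neither is treated as a failure here.
func slurp(r io.Reader, size int) ([]byte, error) {
	buf := make([]byte, size)
	n, err := io.ReadFull(r, buf)
	if err != nil && !errors.Is(err, io.EOF) && !errors.Is(err, io.ErrUnexpectedEOF) {
		return nil, err
	}
	return buf[:n], nil
}
```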
* add back links to file format header information
Signed-off-by: kim <grufwub@gmail.com>
* Add local user and post count to nodeinfo responses
This fixes #1307 (at least partially). The nodeinfo endpoint should now
return the total users on an instance, along with their post count.
* Update NodeInfoUsers docstring and swagger yaml file
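For reference, the nodeinfo 2.0 fields now being populated look roughly like this (struct shape is a sketch; the JSON field names follow the nodeinfo schema):

```go
package nodeinfo

// Usage mirrors nodeinfo 2.0's usage block: usage.users.total carries
// the local user count and usage.localPosts their post count.
type Usage struct {
	Users      Users `json:"users"`
	LocalPosts int   `json:"localPosts"`
}

type Users struct {
	Total int `json:"total"`
}
```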
* launch websocket streaming in goroutine to allow upgrade handler to return
* don't send any message on ping, improved close check on failed read
* use context to signal wsconn close, ensure canceled in read goroutine
Signed-off-by: kim <grufwub@gmail.com>
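A sketch of the pattern these three commits describe, assuming gorilla/websocket (the upgrader config and write loop are stubs):

```go
package streaming

import (
	"context"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{} // assumed; CheckOrigin etc. omitted

// handleWS upgrades and returns immediately: the read goroutine cancels
// a shared context when the client goes away (no ping replies are sent),
// which stops the write side too.
func handleWS(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return
	}
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		defer cancel()
		for {
			// we expect no real client messages; a read error means
			// the connection closed, so signal via the context.
			if _, _, err := conn.ReadMessage(); err != nil {
				return
			}
		}
	}()
	go func() {
		defer conn.Close()
		writeLoop(ctx, conn)
	}()
}

// writeLoop stands in for the actual event streaming.
func writeLoop(ctx context.Context, conn *websocket.Conn) {
	<-ctx.Done()
}
```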
* AWS S3 config details added
It's worth noting that since presigned URLs are used, buckets don't need to be exposed publicly. This differs from other Mastodon-specific S3 bucket guides, hence it's documented here for correct directions.
* Update storage.md
1. Added an AWS identifier to make it clear it's AWS-specific.
2. Adjusted the text around data migration.
* Update as requested
Refined the doc as per the request.
* [feature] Add throttling middleware to AP endpoints
* refactor a lil bit
* use config setting, start updating docs
* doc updates
* use relative links in faq doc
* small docs fixes
* return code 503 instead of 429 when throttled
* throttle other endpoints too
* simplify token channel prefills
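A sketch of the token-channel throttle these commits describe (sizing the limit from GOMAXPROCS and the Retry-After value are up to the caller; this is not GTS's exact middleware):

```go
package middleware

import "net/http"

// Throttle prefills a buffered channel with tokens; a request that can't
// take one immediately is rejected with 503 (not 429), matching the
// change above. The prefill is a single simple loop.
func Throttle(limit int, next http.Handler) http.Handler {
	tokens := make(chan struct{}, limit)
	for i := 0; i < limit; i++ {
		tokens <- struct{}{}
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case tok := <-tokens:
			defer func() { tokens <- tok }() // return the token when done
			next.ServeHTTP(w, r)
		default:
			w.Header().Set("Retry-After", "30")
			http.Error(w, http.StatusText(http.StatusServiceUnavailable),
				http.StatusServiceUnavailable)
		}
	})
}
```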