When GTS is running in a container runtime with configured CPU or memory limits, or under an init system that uses cgroups to impose CPU and memory limits, the values the Go runtime sees for GOMAXPROCS and GOMEMLIMIT are still based on the host's resources, not the cgroup's.
At least for the throttling middlewares, which use GOMAXPROCS to configure their queue size, this can result in GTS running with values too big for the resources actually available to it.
This introduces two dependencies which can pick up resource constraints from the current cgroup and tune the Go runtime accordingly. This should result in the different queues being appropriately sized and, in general, more predictable performance. These dependencies are a no-op on non-Linux systems, or when running in a cgroup that doesn't set a limit on CPU or memory.
The automatic tuning of GOMEMLIMIT can be disabled either by explicitly setting GOMEMLIMIT yourself or by setting AUTOMEMLIMIT=off. The automatic tuning of GOMAXPROCS can similarly be counteracted by setting GOMAXPROCS yourself.
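A minimal sketch of what wiring this up looks like. The text above doesn't name the two dependencies, but go.uber.org/automaxprocs and github.com/KimMachineGun/automemlimit match the behaviour described (including the AUTOMEMLIMIT=off switch), so they're assumed here:
```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"

	// Assumed pairing (see the note above); both do their work in init(),
	// and are a no-op on non-Linux systems or in a cgroup with no limits.
	_ "github.com/KimMachineGun/automemlimit" // sets GOMEMLIMIT from the cgroup memory limit
	_ "go.uber.org/automaxprocs"              // sets GOMAXPROCS from the cgroup CPU quota
)

func main() {
	// After the imports' init() functions have run, both values reflect
	// the cgroup limits rather than the host's resources.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))    // 0 = read current value
	fmt.Println("GOMEMLIMIT:", debug.SetMemoryLimit(-1)) // negative = read without setting
}
```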
* don't serve unused fields for video attachments
* parse video bitrate + duration more accurately
* use ServeContent where appropriate to respect Range (see the sketch after this list)
* abstract temp file seeker into its own function
* media processor consolidation and reformatting, reduce the number of required syscalls
Signed-off-by: kim <grufwub@gmail.com>
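For the Range handling mentioned above, a minimal sketch of the standard-library approach (handler and variable names are illustrative, not GTS's actual code):
```go
package media

import (
	"net/http"
	"os"
)

// serveFile hands the open file to http.ServeContent, which takes care of
// Range requests, If-Modified-Since, and Content-Length for any io.ReadSeeker.
func serveFile(w http.ResponseWriter, r *http.Request, f *os.File) {
	fi, err := f.Stat()
	if err != nil {
		http.Error(w, "stat failed", http.StatusInternalServerError)
		return
	}
	http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
}
```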
* update go-store library; stream jpeg/png encoding + use buffer pools; improve media processing AlreadyExists error handling
Signed-off-by: kim <grufwub@gmail.com>
* fix duration not being set, fix mp4 test expecting error
Signed-off-by: kim <grufwub@gmail.com>
* fix test expecting media files with different extension
Signed-off-by: kim <grufwub@gmail.com>
* remove unused code
Signed-off-by: kim <grufwub@gmail.com>
* fix expected storage paths in tests, update expected test thumbnails
Signed-off-by: kim <grufwub@gmail.com>
* remove dead code
Signed-off-by: kim <grufwub@gmail.com>
* fix cached presigned s3 url fetching
Signed-off-by: kim <grufwub@gmail.com>
* fix tests
Signed-off-by: kim <grufwub@gmail.com>
* fix test models
Signed-off-by: kim <grufwub@gmail.com>
* update media processing to use sync.Once{} for concurrency protection
Signed-off-by: kim <grufwub@gmail.com>
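Roughly the shape of that change (type and method names are illustrative, not GTS's actual ones): each in-flight media item carries a sync.Once, so concurrent callers trigger processing exactly once and share the result.
```go
package media

import (
	"context"
	"sync"
)

// ProcessingMedia guards expensive processing with a sync.Once: process()
// runs exactly once even under concurrent Load calls, and every caller
// observes the same error.
type ProcessingMedia struct {
	once sync.Once
	err  error
}

func (p *ProcessingMedia) Load(ctx context.Context) error {
	p.once.Do(func() {
		p.err = p.process(ctx) // expensive decode/transcode happens only once
	})
	return p.err
}

func (p *ProcessingMedia) process(ctx context.Context) error {
	// ... actual media processing would go here ...
	return nil
}
```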
* shut up linter
Signed-off-by: kim <grufwub@gmail.com>
* fix passing in KVStore GetStream() as stream to PutStream()
Signed-off-by: kim <grufwub@gmail.com>
* fix unlocks of storage keys
Signed-off-by: kim <grufwub@gmail.com>
* whoops, return the error...
Signed-off-by: kim <grufwub@gmail.com>
* pour one out for tobi's code <3
Signed-off-by: kim <grufwub@gmail.com>
* add back the byte slurping code
Signed-off-by: kim <grufwub@gmail.com>
* check for both ErrUnexpectedEOF and EOF
Signed-off-by: kim <grufwub@gmail.com>
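That check typically looks like the following sketch: io.ReadFull returns io.EOF on empty input and io.ErrUnexpectedEOF on a short read, and both need handling when slurping a fixed-size header.
```go
package media

import (
	"errors"
	"io"
)

// readHeader slurps up to len(buf) bytes; a short or empty file is not an
// error here, it just yields fewer bytes.
func readHeader(r io.Reader, buf []byte) (int, error) {
	n, err := io.ReadFull(r, buf)
	if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) {
		return n, nil
	}
	return n, err
}
```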
* add back links to file format header information
Signed-off-by: kim <grufwub@gmail.com>
* Add local user and post count to nodeinfo responses
This fixes #1307 (at least partially). The nodeinfo endpoint should now
return the total users on an instance, along with their post count.
* Update NodeInfoUsers docstring and swagger yaml file
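For reference, a sketch of the nodeinfo 2.0 usage object this populates. The JSON field names come from the nodeinfo schema; the Go shape around the NodeInfoUsers type mentioned above is illustrative:
```go
package apimodel

// NodeInfoUsers reports user counts; per the nodeinfo schema, "total" is
// the total number of local users on the instance.
type NodeInfoUsers struct {
	Total int `json:"total"`
}

// NodeInfoUsage wraps user and post counts for the nodeinfo response.
type NodeInfoUsage struct {
	Users      NodeInfoUsers `json:"users"`
	LocalPosts int           `json:"localPosts"`
}
```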
* launch websocket streaming in goroutine to allow upgrade handler to return
* don't send any message on ping, improved close check on failed read
* use context to signal wsconn close, ensure canceled in read goroutine
Signed-off-by: kim <grufwub@gmail.com>
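A sketch of the pattern those three commits describe, assuming a gorilla/websocket connection (names are illustrative, not GTS's actual streaming code): the upgrade handler hands the connection off and returns, a context represents connection liveness, and the read goroutine exists only to cancel it.
```go
package streaming

import (
	"context"

	"github.com/gorilla/websocket"
)

func streamToWS(ctx context.Context, wsConn *websocket.Conn, msgs <-chan any) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	defer wsConn.Close()

	// Read goroutine: we never expect (or answer) client messages; a
	// failed read means the connection is gone, so cancel the context.
	go func() {
		defer cancel()
		for {
			if _, _, err := wsConn.ReadMessage(); err != nil {
				return
			}
		}
	}()

	// Write loop: ctx.Done() fires when the reader observes a close.
	for {
		select {
		case <-ctx.Done():
			return
		case msg := <-msgs:
			if err := wsConn.WriteJSON(msg); err != nil {
				return
			}
		}
	}
}
```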
* AWS S3 config details added
It's worth noting that since presigned URLs are used, buckets don't need to be exposed publicly. This is an interesting change compared to other Mastodon-specific S3 bucket guides, hence it's documented here to give correct directions.
* Update storage.md
1. Added an AWS identifier to make it clear it's AWS-specific.
2. Adjusted text around data migration.
* updates as requested
Refining the doc as per the request.
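The reason the bucket can stay private: GTS serves media by generating short-lived presigned GET URLs server-side. A sketch of what that looks like, assuming an S3 client such as minio-go (bucket and key are placeholders):
```go
package storage

import (
	"context"
	"net/url"
	"time"

	"github.com/minio/minio-go/v7"
)

// presignGet returns a time-limited URL for one object; the bucket itself
// never needs public read access.
func presignGet(ctx context.Context, client *minio.Client, bucket, key string) (*url.URL, error) {
	return client.PresignedGetObject(ctx, bucket, key, 15*time.Minute, nil)
}
```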
* [feature] Add throttling middleware to AP endpoints
* refactor a lil bit
* use config setting, start updating docs
* doc updates
* use relative links in faq doc
* small docs fixes
* return code 503 instead of 429 when throttled
* throttle other endpoints too
* simplify token channel prefills (see the sketch after this list)
* interim commit: start refactoring middlewares into package under router
* another interim commit, this is becoming a big job
* another fucking massive interim commit
* refactor bookmarks to new style
* ambassador, wiz zeze commits you are spoiling uz
* she compiles, we're getting there
* we're just normal men; we're just innocent men
* apiutil
* whoopsie
* i'm glad noone reads commit msgs haha :blob_sweat:
* use that weirdo go-bytesize library for maxMultipartMemory
* fix media module paths
* start messing about with different mp4 metadata extraction
* heyyooo it works
* add test cow
* move useful multierror to gtserror package
* error out if video doesn't seem to be a real mp4
* test parsing mkv in disguise as mp4
* tidy up error handling
* remove extraneous line
* update framerate formatting
* use float32 for aspect
* fixy mctesterson
* close pipereader on failed data function (see the io.Pipe sketch after this list)
* gently slurp the bytes
* readability updates
* go fmt
* tidy up file server tests + add more cases
* start moving io wrappers to separate iotools package. Remove use of buffering while piping recache stream
Signed-off-by: kim <grufwub@gmail.com>
* add license text
Signed-off-by: kim <grufwub@gmail.com>
Co-authored-by: kim <grufwub@gmail.com>
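A sketch of the throttling approach these commits describe, assuming a gin-style middleware (GTS's router is gin-based; the function name and multiplier are illustrative): a buffered channel is prefilled with tokens, and a request either takes a token or is turned away.
```go
package middleware

import (
	"net/http"
	"runtime"

	"github.com/gin-gonic/gin"
)

// Throttle prefills a token channel sized from GOMAXPROCS (which is why
// the cgroup-aware tuning described earlier matters).
func Throttle(multiplier int) gin.HandlerFunc {
	limit := runtime.GOMAXPROCS(0) * multiplier
	tokens := make(chan struct{}, limit)
	for i := 0; i < limit; i++ {
		tokens <- struct{}{} // prefill: every token starts available
	}

	return func(c *gin.Context) {
		select {
		case <-tokens:
			defer func() { tokens <- struct{}{} }() // return the token
			c.Next()
		default:
			// Per the commit above: 503 rather than 429, with a hint
			// at when to retry.
			c.Header("Retry-After", "30")
			c.AbortWithStatus(http.StatusServiceUnavailable)
		}
	}
}
```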
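And the general io.Pipe rule behind "close pipereader on failed data function": whichever side fails must close its end with the error so the other side unblocks. A producer-side sketch (the data function here is hypothetical):
```go
package iotools

import "io"

// StreamData runs a data function against the write half of a pipe.
// CloseWithError(nil) behaves like Close (the reader gets io.EOF); on
// failure the reader's next Read returns the original error rather than
// blocking forever.
func StreamData(data func(w io.Writer) error) io.Reader {
	pr, pw := io.Pipe()
	go func() {
		pw.CloseWithError(data(pw))
	}()
	return pr
}
```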
Lots of these were appearing:
```
*459 connect() failed (111: Connection refused) while connecting to upstream
```
This change resolves it; see https://stackoverflow.com/a/52550758
* for domain block lookups, lookup along subdomain parts
Signed-off-by: kim <grufwub@gmail.com>
* only look up a maximum of 5 domain parts to prevent DoS; limit inserted domains to a maximum of 5 subdomains
Signed-off-by: kim <grufwub@gmail.com>
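Roughly what that lookup does (a sketch; the function name is illustrative, and stopping before the bare TLD is an assumption): split the host on dots, cap at 5 labels, then check the host and each parent domain against the block list, so a block on example.org also catches media.evil.example.org.
```go
package domain

import "strings"

// blockCandidates returns the domains to check for "host", from most to
// least specific, capped at 5 labels so hostile input can't force
// unbounded lookups.
func blockCandidates(host string) []string {
	const maxLabels = 5
	labels := strings.Split(host, ".")
	if len(labels) > maxLabels {
		labels = labels[len(labels)-maxLabels:]
	}
	candidates := make([]string, 0, len(labels))
	for i := 0; i+2 <= len(labels); i++ { // stop before the bare TLD
		candidates = append(candidates, strings.Join(labels[i:], "."))
	}
	return candidates
}
```
For example, blockCandidates("media.evil.example.org") yields media.evil.example.org, evil.example.org, and example.org.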
* add test for domain block wildcarding
Signed-off-by: kim <grufwub@gmail.com>
* check cached status first, increase cached domain time
Signed-off-by: kim <grufwub@gmail.com>
* fix domain wildcard part building logic
Signed-off-by: kim <grufwub@gmail.com>
* create separate domain.BlockCache{} type to hold all domain blocks in memory
Signed-off-by: kim <grufwub@gmail.com>
* remove unused variable
Signed-off-by: kim <grufwub@gmail.com>
* add docs and test to domain block cache, check for domain == host in domain block getter funcs
Signed-off-by: kim <grufwub@gmail.com>
* add license text
Signed-off-by: kim <grufwub@gmail.com>
* check order in which we check primary cache
Signed-off-by: kim <grufwub@gmail.com>
* add better documentation of how domain block checking is performed
Signed-off-by: kim <grufwub@gmail.com>
* change
Signed-off-by: kim <grufwub@gmail.com>