# QoS
Long Computation and Large Storage
## Long Computation
All non-cron computation nodes time out after 30 seconds. To turn a node into a long computation node, you have to request it through `qos` by setting the `long` property to `true`. This gives the node up to 30 minutes of computation.
```json
"nodes": [{
  "#": "node1",
  "implementation": {
    "javascript": "server/node1_impl.js",
    "qos": {
      "long": true
    }
  }
}]
```
### Limits for regular nodes

- 30 seconds of execution (default)
- 30 minutes of execution with the `long` flag
### Limits for cron nodes

Cron nodes are long computation nodes by default.
### Premium functionality

Long computation nodes are intended to be available only to premium users.
## Large Storage
A node can request read/write access to the filesystem by specifying a `large-storage` attribute in the `qos` section of a node's `implementation`. It takes an array whose elements can be in either of two formats:

- A string of the form `bucket:permission`, where `bucket` is the name of the storage and `permission` is either `ro` (read-only) or `rw` (read-write). Omitting the `permission` part is the same as specifying read-only permission.
- A dictionary of the form `{"bucket": "name", "write": true/false}`. Omitting the `write` parameter defaults to `false`.
"nodes": [{
"#": "node1",
"implementation": {
"javascript": "server/node1_impl.js",
"when": {
"interval": 1000
},
"qos":{
"large-storage": [
"bucket0:ro",
{
"bucket": "bucket1",
"write": true
},
{
"bucket": "bucket2"
},
"attachments"
]
}
}
}]
- Attachments: the relevant data automatically populates the `attachments` field in the JMAP representation of an email.
- General large storage buckets: to access the data stored on the filesystem, fetch the path of the relevant directory from the `_LARGE_STORAGE_<bucket-name>` environment variable in the node, e.g. for "bucket2" it will be under `_LARGE_STORAGE_bucket2` (see the sketch after this list).
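As a sketch of the second case, a node implementation can resolve bucket paths from these environment variables at run time. The snippet below assumes the bucket names from the example manifest above and the SDK convention of exporting a function from the implementation file; the file name `report.txt` and the logging are illustrative, not part of the API.

```javascript
// server/node1_impl.js — a minimal sketch, assuming the manifest above.
const fs = require('fs');
const path = require('path');

module.exports = function (got) {
  // The runtime exposes each requested bucket's directory through a
  // _LARGE_STORAGE_<bucket-name> environment variable.
  const readOnlyDir = process.env._LARGE_STORAGE_bucket0; // requested as "bucket0:ro"
  const writableDir = process.env._LARGE_STORAGE_bucket1; // requested with "write": true

  // Write a file into the writable bucket ("report.txt" is illustrative)...
  fs.writeFileSync(path.join(writableDir, 'report.txt'), 'hello large storage');

  // ...and inspect the contents of the read-only one.
  console.log('bucket0 contains:', fs.readdirSync(readOnlyDir));
};
```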
### Write permissions

Note that only one node can request a given large-storage bucket with write permission; however, any number of nodes can request the same bucket as read-only. This restriction avoids concurrency issues.
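For instance, a manifest fragment like the one below (node names and implementation paths are hypothetical) pairs a single writer with two read-only consumers of the same bucket; note that the bare string `"bucket1"` in the last node implies read-only access:

```json
"nodes": [{
  "#": "writer",
  "implementation": {
    "javascript": "server/writer_impl.js",
    "qos": { "large-storage": [{ "bucket": "bucket1", "write": true }] }
  }
}, {
  "#": "reader1",
  "implementation": {
    "javascript": "server/reader1_impl.js",
    "qos": { "large-storage": ["bucket1:ro"] }
  }
}, {
  "#": "reader2",
  "implementation": {
    "javascript": "server/reader2_impl.js",
    "qos": { "large-storage": ["bucket1"] }
  }
}]
```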
### Naming of large-storage buckets

Large-storage bucket names should not clash with the store and export buckets defined in the DAG.
### Attachments

`"attachments"` is the default name for the email attachment storage. If you rename it in the port definition, you should use the same name to get access to it. You can only request `ro` permission for attachments.
### SDK

When using the SDK, large storage is available in the folder `sdk_tmp/large-storage/[email protected]/sift`. You can inspect this folder to debug large-storage files written by your node.
### Premium functionality

Large storage access for nodes is intended to be available only to premium users.