Stores are the server-side, in-DAG mechanism that nodes use to source and sink data. To transport data from the DAG to Red Sift clients, have a look at exports.
In addition to persisting data, stores can perform key space operations and aggregations over a subset of their data before importing or outputting it. That is why a key schema is part of their definition.

Stores also provide generation counts and/or discard logic based on a time-to-live (ttl).

Nodes read data from stores for their computations via their input property. A node writes to a store via its outputs property.

"nodes": [
  {
    "implementation": "...",
    "input": "...",
    "outputs": {
      "right": {
        "ttl": "100"
      }
    }
  }
],
"stores": {
  "left": {
    "key$schema": "string/string/string",
    "createOnly": true
  },
  "right": {
    "key$schema": "string/string"
  }
}
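As an illustration, a node implementation that writes into the right store might look like the sketch below. The callback signature, the shape of the incoming payload, and the field names (`user`, `id`) are assumptions for illustration, not part of this document:

```javascript
// Hypothetical node implementation. It is assumed that got.in.data is an
// array of { key, value } records delivered from the node's input, and that
// the node emits entries by returning { name, key, value } objects.
function node(got) {
  return got.in.data.map(function (d) {
    var value = JSON.parse(d.value);
    return {
      name: 'right',                      // target store, declared in "outputs"
      key: value.user + '/' + value.id,   // two segments match "string/string"
      value: JSON.stringify({ seen: true })
    };
  });
}

module.exports = node;
```

Note that the emitted key has exactly two segments, matching the right store's key$schema of "string/string".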


ttl, createOnly and exports

ttl and createOnly are supported only by stores; exports cannot use this configuration.


ttl (in seconds) is how long a value is stored before it expires. It may be specified on the store and/or overridden at each node output.


ttl expiry

When the ttl expires, the data element is removed completely, so metadata such as the generation count is reset. This is the only way to truly delete data through normal operation of the store.


createOnly specifies that an entry is written only if no entry existed for that key before. This has the effect of discarding a write when a value already exists, which prevents a cascade of updates through the graph.
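The createOnly write rule can be modelled as follows. This is an illustrative sketch of the behaviour described above, not Dagger's actual implementation:

```javascript
// Illustrative model of createOnly semantics: a write to a key that already
// holds a value is discarded, so downstream nodes are not re-triggered.
function writeCreateOnly(store, key, value) {
  if (store.has(key)) {
    return false; // discarded: no change, no cascade through the graph
  }
  store.set(key, value);
  return true;    // created: downstream nodes observing this store fire
}
```

For example, a second write to the same key leaves the first value in place and returns false, signalling that nothing downstream needs to run.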



A combination of ttl and createOnly can turn a store into a cache for expensive lookup operations.
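For example, a store configured as a lookup cache might look like the fragment below; the store name and the ttl value are illustrative:

```json
"stores": {
  "geoCache": {
    "key$schema": "string",
    "createOnly": true,
    "ttl": "86400"
  }
}
```

Here createOnly makes repeated lookups for the same key no-ops, and the ttl eventually evicts each cached value so it can be recomputed.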

Namespace for stores

Inside Dagger, data in stores is persisted under a namespace based on the Account ID and the Sift GUID. Keys inside your stores under that namespace are preserved between Sift updates. A namespace is cleared when a user deletes a Sift from their profile or when the major version of the Sift changes.