Support for S3-compatible backend for storing UnityBase files (BLOB stores)
Existing stores can be switched to S3. Files added before the migration will remain available from the file system.
Configuring
- add the package to the application: `npm i @unitybase/s3-blob-store`
- add `@unitybase/s3-blob-store` as an application model into `ubConfig.json`
```json
{
  "application": {
    "domain": {
      "models": [
        // ...
        //#ifdef(%UB_USE_S3||false%=true)
        {
          "path": "./node_modules/@unitybase/s3-blob-store"
        },
        //#endif
      ]
    }
  }
}
```
- for BLOB stores that should be stored in S3, add `"implementedBy"`
```json
{
  "application": {
    "blobStores": [
      {
        "name": "yourStore",
        //#ifdef(%UB_USE_S3||false%=true)
        "implementedBy": "@unitybase/s3-blob-store",
        "s3enabled": true
        //#endif
      }
    ]
  }
}
```
- define environment variables. The `s3-blob-store` model adds a partial config with the following environment variables (an example set of values is shown after the table):
Variable name | Default | Description |
---|---|---|
UB_USE_S3 | false | Enable S3 |
UB_S3_URL | http://127.0.0.1:9000 | S3 server URL (example: https://s3.us-east-1.amazonaws.com) |
UB_S3_REGION | us-east-1 | S3 bucket region. For MinIO keep the default |
UB_S3_KEY | | S3 accessKeyId |
UB_S3_SECRET | | S3 secretAccessKey |
UB_S3_DEFAULT_BUCKET | ubbs | Default bucket name where new documents will be written |
UB_S3_ANONYMOUS_READ | true | Enable read requests to be proxied through nginx. See "Buckets policy configuration" |
UB_S3_UA_SECRET | CHANGE_ME_TO_BE_SECURE | Secret User-Agent value. See "Buckets policy configuration" |
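For example, a local development setup backed by MinIO might use values like the following. All values below are illustrative placeholders (MinIO default credentials, local URL); set the variables in whatever way your deployment uses (shell exports, a service environment file, etc.):

```
UB_USE_S3=true
UB_S3_URL=http://127.0.0.1:9000
UB_S3_REGION=us-east-1
UB_S3_KEY=minioadmin
UB_S3_SECRET=minioadmin
UB_S3_DEFAULT_BUCKET=ubbs
UB_S3_ANONYMOUS_READ=true
# replace by a real random string - see "Buckets policy configuration" below
UB_S3_UA_SECRET=CHANGE_ME_TO_BE_SECURE
```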
Buckets policy configuration
In a production environment we strongly recommend proxying read (getDocument) requests through nginx. To do so:
- set the environment variables `UB_S3_ANONYMOUS_READ=true` and `UB_S3_UA_SECRET=secureRandomString`. Instead of `secureRandomString` set an actual RANDOM string; it can be generated using the `openssl rand -base64 18` or `tr -dc A-Za-z0-9 </dev/urandom | head -c 18; echo` bash command
- call `ubcli generateNginxCfg` - this adds an internal location `s3` into the nginx config
- for each bucket used by UnityBase, configure a bucket policy that allows anonymous reading from the specified IP addresses (replace the value of `aws:SourceIp` with your nginx address) and for a custom User-Agent (replace the value of `aws:UserAgent` below with the UB_S3_UA_SECRET environment variable value)
- manually create a bucket with the name defined in the `UB_S3_DEFAULT_BUCKET` variable and set the policy below (with `SourceIp` and `UserAgent` replaced) for it (and for all additional buckets that may be used)
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::ubbs/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "127.0.0.1/32"
          ]
        },
        "StringLike": {
          "aws:UserAgent": [
            "CHANGE_ME_TO_BE_SECURE"
          ]
        }
      }
    }
  ]
}
```
To define a policy for MinIO: in the user interface, go to Admin -> Buckets -> yourBucket and click the icon next to the Access Policy text in the Summary panel. Select "custom" and paste the policy JSON.
For bucket policy rules see https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html
Now all new items will be stored in S3; old items remain in their old location.
Implementation details
For objects stored in S3, the BlobStoreItem (the JSON stored in the database) contains a path that starts with `s3://` (followed by `bucket/path/to/object`) in the `relPath` attribute.
Old files stored in the file system before switching to S3 (with a `relPath` that does not start with `s3://`) remain accessible for reading from their old location.
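For illustration, the difference between new and old items might look like this in the stored JSON (the bucket name and object paths below are made-up placeholders; other BlobStoreItem attributes are omitted):

```json
// new object, written to S3 ("ubbs" bucket; object path is a placeholder)
{ "relPath": "s3://ubbs/documents/2024/1b2c3d4e.pdf" }
// old object, still served from the file system (path is a placeholder)
{ "relPath": "documents/2023/0a1b2c3d.pdf" }
```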
Until the database commit, files are stored in the BLOB store temp folder; just before the commit they are moved into S3 (see `TubDataStore.commitBLOBStore`).
Therefore, in case of several UnityBase instances, either the ip_hash strategy must be used for load balancing, or the store temp folder must be shared between the instances.
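For the load-balancing option, the sticky strategy can be expressed in nginx with the `ip_hash` directive. A minimal sketch (the upstream name and server addresses are placeholders, not part of the config generated by UnityBase):

```nginx
upstream ub_app {
  # requests from the same client IP always go to the same UnityBase instance,
  # so the commit sees the temp files written by earlier requests
  ip_hash;
  server 127.0.0.1:8881;
  server 127.0.0.1:8882;
}
```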
By default, all BLOB stores save files into the same `UB_S3_DEFAULT_BUCKET` bucket. An application can define its own bucket selection logic by subscribing to the App `getBucketName` event and mutating the `bucketCfg` object. Example:
```javascript
const App = require('@unitybase/ub').App

/**
 * Put attachments of archive entities (entity names starting with 'arc_') into a separate 'archive' bucket
 * @param {object} item
 * @param {UBEntityAttribute} item.attribute
 * @param {BlobStoreItem} item.dirtyItem
 * @param {object} bucketCfg
 * @param {string} bucketCfg.name
 */
function calcBlobStoreName ({ attribute, dirtyItem }, bucketCfg) {
  if (attribute.entity.name.startsWith('arc_')) {
    bucketCfg.name = 'archive'
  }
}
App.on('getBucketName', calcBlobStoreName)
```
Testing
For testing purposes we recommend using MinIO.