AI Image Tagging

Description

The AI Image Tagging service analyses images and writes the detected tags to a MAM attribute. The tagging can be done by one of the following providers: Google Vision, Clarifai, Microsoft Vision, Pixyle, Imagga or Ximilar Recognition.

Configuration

Workflow-Name: ai-imagetagging

Keys marked with * are required.

ext.ai-imagetagging.tagTargetAttribute* (string)
    Target MAM attribute identifier to write the detected tags to. The attribute must be of type "string".

ext.ai-imagetagging.google.apiKey* (string)
    API key for the Google Vision service.

ext.ai-imagetagging.clarifai.apiKey* (string)
    API key for the Clarifai service.

ext.ai-imagetagging.clarifai.workflowId* (string)
    ID of the Clarifai workflow to use.

ext.ai-imagetagging.ms.apiKey* (string)
    API key for the Microsoft Vision service.

ext.ai-imagetagging.pixyle.apiKey* (string)
    API key for the Pixyle Tagging service.

ext.ai-imagetagging.imagga.apiKey* (string)
    API key for the Imagga Tagging service.

ext.ai-imagetagging.ximilar.apiKey* (string)
    API key for the Ximilar Tagging service.

ext.ai-imagetagging.ximilar.recognition.taskId* (string)
    ID of the Ximilar task (trained model) to use.

ext.ai-imagetagging.taggingService (string, default: google)
    Tagging service to use. One of 'google', 'clarifai', 'ms', 'pixyle', 'imagga' or 'ximilar-recognition'.
    Example: clarifai

ext.ai-imagetagging.tagThreshold (double, default: 0.90)
    Threshold for tag detection. Only tags with a score higher than this value are considered.

ext.ai-imagetagging.prettyPrint (boolean, default: false)
    If true, prints a pretty string such as an HTML-rendered table. Used for demo purposes.
    Example: true

ext.ai-imagetagging.google.apiUrl (string, default: https://vision.googleapis.com/v1)
    Base URL of the Google Vision service.

ext.ai-imagetagging.google.features (string - multiple, default: LABEL_DETECTION)
    List of Google feature types to run on the selected asset. See the full list at https://cloud.google.com/vision/docs/reference/rest/v1/Feature
    Example: FACE_DETECTION, LABEL_DETECTION, WEB_DETECTION

ext.ai-imagetagging.google.maxResults (long, default: 15)
    Maximum number of results to return for each feature.

ext.ai-imagetagging.google.mamAttributeMappings (string - multiple, default: see "Mappings" below)
    List of JSON path to MAM attribute identifier mappings. Specify here which values from the Google API result should be written to which MAM attribute identifier (comma-separated if multiple values).

ext.ai-imagetagging.clarifai.apiUrl (string, default: https://api.clarifai.com/v2)
    Base URL of the Clarifai service.

ext.ai-imagetagging.clarifai.modelName (string, default: general)
    Name of the Clarifai model to use.
    Example: InnoDay

ext.ai-imagetagging.clarifai.mamAttributeMappings (string - multiple, default: see "Mappings" below)
    List of JSON path to MAM attribute identifier mappings. Specify here which values from the Clarifai API result should be written to which MAM attribute identifier (comma-separated if multiple values).

ext.ai-imagetagging.ms.apiUrl (string, default: https://westeurope.api.cognitive.microsoft.com/vision/v3.2/analyze)
    Base URL of the Microsoft Vision service.

ext.ai-imagetagging.ms.visualFeatures (string, default: Tags)
    List of visual features for the Microsoft Vision service. See the full list at https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b
    Example: Tags, Faces

ext.ai-imagetagging.ms.details (string)
    List of details for the Microsoft Vision service. See the full list at https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b
    Example: Celebrities

ext.ai-imagetagging.ms.language (string)
    Language for the Microsoft Vision service. Optional: if nothing is set, MS uses 'en'. See the full list at https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b
    Example: en

ext.ai-imagetagging.ms.modelVersion (string)
    Model version for the Microsoft Vision service. Optional: if nothing is set, MS uses 'latest'. See also https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b
    Example: latest

ext.ai-imagetagging.ms.mamAttributeMappings (string - multiple, default: see "Mappings" below)
    List of JSON path to MAM attribute identifier mappings. Specify here which values from the Microsoft Vision API result should be written to which MAM attribute identifier (comma-separated if multiple values).

ext.ai-imagetagging.pixyle.apiUrl (string, default: https://pva.pixyle.ai/v4)
    Base URL of the Pixyle Tagging service.

ext.ai-imagetagging.pixyle.mamAttributeMappings (string, default: see "Mappings" below)
    List of MAM attribute mappings. A default Velocity template is used here, which pretty-prints if the 'prettyPrint' config value is set.

ext.ai-imagetagging.imagga.apiUrl (string, default: https://api.imagga.com/v2)
    Base URL of the Imagga Tagging service.

ext.ai-imagetagging.imagga.language (string, default: en)
    Language for the Imagga Tagging service. Optional: if nothing is set, 'en' is used by default. Use only a single value here (not comma-separated). See the full list at https://docs.imagga.com/#multi-language-support
    Example: de

ext.ai-imagetagging.imagga.limit (long, default: -1 (all tags))
    Limits the number of tags in the result to the given number.

ext.ai-imagetagging.imagga.mamAttributeMappings (string, default: see "Mappings" below)
    List of MAM attribute mappings. A default Velocity template is used here, which pretty-prints if the 'prettyPrint' config value is set.

ext.ai-imagetagging.ximilar.recognition.apiUrl (string, default: https://api.ximilar.com/recognition/v2)
    Base URL of the Ximilar Recognition Tagging service.

ext.ai-imagetagging.ximilar.recognition.version (long, default: latest version if not specified)
    Version of the task (defined in *.taskId) to use.

ext.ai-imagetagging.ximilar.recognition.storeImages (boolean, default: false)
    If true, the images are also stored in your workspace as training images.
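As an illustration, a minimal configuration for the default Google provider could look like the following. This is a sketch only: the key/value property syntax, the GOOGLE_TAGS attribute identifier and the `<your-api-key>` placeholder are assumptions, not values prescribed by this service.

```properties
# Hypothetical minimal setup for the ai-imagetagging workflow (Google provider)
ext.ai-imagetagging.taggingService = google
ext.ai-imagetagging.google.apiKey = <your-api-key>
ext.ai-imagetagging.tagTargetAttribute = GOOGLE_TAGS
ext.ai-imagetagging.tagThreshold = 0.90
```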

Mappings

A mapping is a string template of the following format:

<expression> -> <mam-attribute-identifier>

The result of the expression is written to the MAM attribute with the given identifier.

The expression can be a simple static String:

testValue -> GOOGLE_TAGS

This would write the String "testValue" into the attribute GOOGLE_TAGS.

More importantly, the expression can contain a JSON path or a Velocity template that selects or transforms the JSON content received from the AI provider.

Example

Google with the feature "LABEL_DETECTION" enabled returns JSON like the following (maxResults is 5 here for brevity):

{
  "labelAnnotations" : [ {
    "mid" : "/m/09j5n",
    "description" : "Footwear",
    "score" : 0.9843785,
    "topicality" : 0.9843785
  }, {
    "mid" : "/m/06rrc",
    "description" : "Shoe",
    "score" : 0.9569366,
    "topicality" : 0.9569366
  }, {
    "mid" : "/m/0hgrj75",
    "description" : "Outdoor shoe",
    "score" : 0.891524,
    "topicality" : 0.891524
  }, {
    "mid" : "/m/0hgs9bq",
    "description" : "Walking shoe",
    "score" : 0.864005,
    "topicality" : 0.864005
  }, {
    "mid" : "/m/09kjlm",
    "description" : "Sneakers",
    "score" : 0.8457816,
    "topicality" : 0.8457816
  } ]
}

The default mapping which is enabled for LABEL_DETECTION is the following:

jsonPath{$.labelAnnotations[?(@.score > ${ext.ai-imagetagging.tagThreshold})]['description']} : jsonPath{$.labelAnnotations[?(@.score > ${ext.ai-imagetagging.tagThreshold})]['score']} -> ${ext.ai-imagetagging.tagTargetAttribute}

After resolving the variable placeholders it looks like this (assuming the default threshold of 0.90 and GOOGLE_TAGS as the target attribute identifier):

jsonPath{$.labelAnnotations[?(@.score > 0.90)]['description']} : jsonPath{$.labelAnnotations[?(@.score > 0.90)]['score']} -> GOOGLE_TAGS

This means that if the score is greater than 0.90, a string in the format "<description> : <score>" is written into GOOGLE_TAGS.

As there will usually be more than one match, the strings are separated by a newline character by default.

Because of the newline character, a "Multiline label" should be used in FSDetails for this attribute.

To test an expression, it is recommended to use an online evaluator together with a sample JSON result, for example https://jsonpath.com/
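The filtering and formatting performed by the default mapping can be sketched in plain Python. Note this is only an illustration of the logic; the service itself evaluates the jsonPath expression with the JsonPath library, not with this code.

```python
# Subset of the Google Vision LABEL_DETECTION response shown above.
result = {
    "labelAnnotations": [
        {"description": "Footwear", "score": 0.9843785},
        {"description": "Shoe", "score": 0.9569366},
        {"description": "Outdoor shoe", "score": 0.891524},
    ]
}

TAG_THRESHOLD = 0.90  # ext.ai-imagetagging.tagThreshold

# Keep only annotations whose score exceeds the threshold and
# format each match as "<description> : <score>".
matches = [
    f"{a['description']} : {a['score']}"
    for a in result["labelAnnotations"]
    if a["score"] > TAG_THRESHOLD
]

# Multiple matches are joined with a newline before being written
# to the target MAM attribute.
value = "\n".join(matches)
print(value)
```

With the default threshold of 0.90, "Outdoor shoe" (score 0.891524) is filtered out and the remaining two labels end up as two lines in the target attribute.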

Supported Expressions

Currently we support the following expressions:

jsonPath{…}
    A JsonPath expression is expected between the curly braces. We use this JsonPath implementation: https://github.com/json-path/JsonPath

velocity{…}
    An inline Apache Velocity template is expected between the curly braces.

velocityFile{…}
    The parameter key of the Velocity template file is expected between the curly braces. For example: "velocityFile{${myTemplate}}", where "myTemplate" is another config param of type FILE.
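For illustration, an inline Velocity mapping over the Google response above might look like the following. This is a hypothetical sketch: the variable name `$json` under which the provider response is exposed to the template is an assumption, not something confirmed by this document.

```
velocity{#foreach($a in $json.labelAnnotations)$a.description#if($foreach.hasNext), #end#end} -> GOOGLE_TAGS
```

If the response were exposed that way, this template would join all label descriptions into one comma-separated string instead of the newline-separated "<description> : <score>" pairs produced by the default jsonPath mapping.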

Default Mappings

Google
jsonPath{$.labelAnnotations[?(@.score > ${ext.ai-imagetagging.tagThreshold})]['description']} : jsonPath{$.labelAnnotations[?(@.score > ${ext.ai-imagetagging.tagThreshold})]['score']} -> ${ext.ai-imagetagging.tagTargetAttribute}
Clarifai
jsonPath{$.outputs[?(@.model.name == '${ext.ai-imagetagging.clarifai.modelName}')].data.concepts[?(@.value > ${ext.ai-imagetagging.tagThreshold})]['name']} : jsonPath{$.outputs[?(@.model.name == '${ext.ai-imagetagging.clarifai.modelName}')].data.concepts[?(@.value > ${ext.ai-imagetagging.tagThreshold})]['value']} -> ${ext.ai-imagetagging.tagTargetAttribute}
MS
jsonPath{$.tags[?(@.confidence > ${ext.ai-imagetagging.tagThreshold})]['name']} : jsonPath{$.tags[?(@.confidence > ${ext.ai-imagetagging.tagThreshold})]['confidence']} -> ${ext.ai-imagetagging.tagTargetAttribute}
Pixyle
velocity{<default-pixyle-velocity-template>} -> ${ext.ai-imagetagging.tagTargetAttribute}

The default Velocity template for Pixyle is checked in to the wfl extensions Git repo.

Imagga
jsonPath{$..[?(@.confidence > ${ext.ai-imagetagging.tagThreshold})]['tag']['${ext.ai-imagetagging.imagga.language}']} : jsonPath{$..[?(@.confidence > ${ext.ai-imagetagging.tagThreshold})]['confidence']} -> ${ext.ai-imagetagging.tagTargetAttribute}
