This application gives users a way to richly annotate a locally stored collection of images.
A key feature is that emotions are stored from the user's explicit input: a value over a range for each of a set of different emotions. This is in contrast to many models where emoticons represent a small set of specific emotions a user must choose from rather than tune. Another key feature is that any image can become a meme of another image, and collections of images with memes can also be made. The user can search through images and collections based upon overlaps with the annotation information, and can also find relevant memes through a search on the bipartite graph; that is, "which images are memes of images matching such criteria?". The current version 1.4.0 is compatible with the webapp (www.tagyourplanet.com), so data can be exported and imported between the apps and across platforms.
- Zip (unzip and run the executable file): download sha256 hash: e732bfb1977d60bf68aa0bfe9c422ae20a14ea992b56744216e14649cde77b71
- Linux Zip: download sha256 hash: f0ad3754e14128452f260fa6ae6677e4d2a687bf354b5220a58600c27cbad474
- The main menu lists the main actions to take and some scores tracking the percentage of images that have been tagged.
- New images can be loaded into the app from the tagging page. Uploaded images are copied into the Taga storage space, so they persist even if deleted from their original location.
- The tagging page allows you to add text, emotions, and memes (memes are other images; any image can become a meme for another image).
- Collections can be produced, but new images cannot be imported on this page; that is only possible during tagging. A collection creation wizard helps the user produce a collection.
- Searching returns a ranking of relevant images based upon the overlap of their annotations with the search criteria. The search also finds images which are memes for the criteria you searched for. When searching for memes to add, this works in the opposite direction, e.g. when searching for a meme you may ask which memes are relevant to images with certain criteria (the success kid with happy celebrations in the meme bar and parties/weddings in the image bar). The opposite question can be asked when looking for the criteria of the success kid via the images associated with it.
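The search over the bipartite meme graph can be sketched roughly as follows. The data shapes, field names, and functions here are illustrative assumptions for the sketch, not the actual Tagasaurus internals:

```javascript
// Hypothetical data shapes -- the real Tagasaurus schema may differ.
const images = [
  { file: 'wedding1.jpg', tags: ['wedding', 'party', 'celebration'] },
  { file: 'office.jpg',   tags: ['work', 'desk'] },
];
// Bipartite meme links: each entry connects a meme image to a base image.
const memeLinks = [
  { meme: 'success-kid.jpg', image: 'wedding1.jpg' },
];

// Rank images by how many tags overlap with the search criteria.
function rankByOverlap(criteria, imgs) {
  return imgs
    .map(img => ({
      file: img.file,
      score: img.tags.filter(t => criteria.includes(t)).length,
    }))
    .filter(r => r.score > 0)
    .sort((a, b) => b.score - a.score);
}

// "Which images are memes of images matching such criteria?" -- walk the
// bipartite links from the matched base images to their memes.
function memesForCriteria(criteria, imgs, links) {
  const matched = new Set(rankByOverlap(criteria, imgs).map(r => r.file));
  return links.filter(l => matched.has(l.image)).map(l => l.meme);
}

console.log(memesForCriteria(['wedding', 'party'], images, memeLinks));
// [ 'success-kid.jpg' ]
```

The opposite direction of the query (criteria on the meme side, walking back to base images) follows by swapping which side of the links is filtered.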
- If an image is not in the main focus, you can click on it to bring it into focus in a modal.
- To back up your annotations you can export all the data, which copies over all the files and the database (SQLite3) and produces a text-based set of files with the annotation data (for analytics). This can be re-imported later or given to another Taga user to "import". Upon import, if the same images are present a "merge" of the data is performed. The same applies to collections, where the gallery is appended.
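The import-time "merge" behaviour could look roughly like the sketch below; the record shape and the union-of-tags rule are assumptions for illustration, not the actual merge logic:

```javascript
// Hypothetical merge rule for re-importing annotations: when the same
// image appears in both stores, union its tags rather than overwrite.
// Field names are illustrative, not the actual Tagasaurus schema.
function mergeAnnotations(existing, imported) {
  const byFile = new Map(existing.map(a => [a.file, { ...a }]));
  for (const ann of imported) {
    const current = byFile.get(ann.file);
    if (current) {
      current.tags = [...new Set([...current.tags, ...ann.tags])];
    } else {
      byFile.set(ann.file, { ...ann });
    }
  }
  return [...byFile.values()];
}

const merged = mergeAnnotations(
  [{ file: 'a.jpg', tags: ['dog'] }],
  [{ file: 'a.jpg', tags: ['dog', 'park'] },
   { file: 'b.jpg', tags: ['cat'] }]
);
console.log(merged);
// a.jpg keeps 'dog' and gains 'park'; b.jpg is appended.
```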
- Updating: if you delete the app and then re-install another version (or the same version), your data should still be there. But you can export and then re-import just to be certain that you don't lose your annotation information.
- Version 1.1.0 onwards allows the user to apply ML that auto-populates the emotions based upon facial expressions and performs facial recognition in searches (from images and video).
- Version 1.2.0 onwards allows a user to import Video, Audio, and PDF media. Linux users can use a script in the folder which remounts vFAT-formatted external media with exec permissions.
- Users can export their data and have it imported by another Tagasaurus desktop application or the webapp at tagyourplanet.com.
This is developed using ElectronJS.
It is hoped that the flat-level GUI will be intuitive. The welcome screen presents the options to tag images, create entities, and export the data. Tagging individual images involves the user loading in images and providing manually inserted annotations. The entity creation process has a similar workflow, with the ability to group images together as a collective entity under a new user-provided label. Representative images are chosen for the collectives, and a wizard assists in the creation of the entities. The export facility produces a JSON file with all the image file annotations and entity collection information, along with a directory of all the image resources used.
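As a rough illustration only, the exported JSON might be shaped like the object below. Every field name here is an assumption; the real Tagasaurus export schema may differ:

```javascript
// Illustrative export shape: per-image annotations plus entity
// collections. Field names are assumed, not the actual schema.
const exportData = {
  images: [
    {
      file: 'wedding1.jpg',
      description: 'A happy couple at their wedding',
      tags: ['wedding', 'couple', 'happy'],
      emotions: { happy: 0.9, sad: 0.0, surprised: 0.2 },
      memes: ['success-kid.jpg'],
    },
  ],
  entities: [
    {
      label: 'Family weddings',
      representative: 'wedding1.jpg',
      members: ['wedding1.jpg'],
    },
  ],
};

// The structure round-trips cleanly through JSON serialization.
console.log(JSON.stringify(exportData, null, 2));
```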
Annotating an image involves inputting a textual description from which "tags" are then produced; emotional values along different dimensions are taken from the user; and there are image links (also known as memes) which the user can choose from. For tagging images the user is not required to insert any specific information, but for entity creation there are requirements. When creating an entity via the wizard, if required information is missing, a notification with a message is presented.
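A minimal sketch of how "tags" might be derived from a textual description. The actual extraction Tagasaurus uses is not specified here, so the lowercasing/stop-word approach below is an assumption:

```javascript
// Assumed tag extraction: lowercase the description, split on non-word
// characters, drop short words, stop words, and duplicates.
const STOP_WORDS = new Set(['the', 'a', 'an', 'and', 'of', 'at', 'with']);

function tagsFromDescription(text) {
  return [...new Set(
    text.toLowerCase()
      .split(/\W+/)
      .filter(w => w.length > 2 && !STOP_WORDS.has(w))
  )];
}

console.log(tagsFromDescription('A happy dog at the beach with a ball'));
// [ 'happy', 'dog', 'beach', 'ball' ]
```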
The purpose is to streamline the annotation process with the emotional granularity required for training ML models. This tool is expected to allow teams to produce easily organized training and validation datasets.
If you use this work in an academic publication, please use the references:
Mantzaris AV, Pandohie R, Hopwood M, Pho P, Ehling D.
"Tagasaurus, a tool to assist manual image tagging and the creation of image collections"
[Link to the article on the Elsevier webpage](https://www.sciencedirect.com/science/article/pii/S2665963821000658?via=ihub)
Mantzaris AV, Pandohie R, Hopwood M, Pho P, Ehling D, Walker TG.
"Introducing Tagasaurus, an Approach to Reduce Cognitive Fatigue from Long-Term Interface Usage When Storing Descriptions and Impressions from Photographs."
Technologies. 2021; 9(3):45. https://doi.org/10.3390/technologies9030045
Connect on social media:
- https://twitter.com/Tagasaurus_app
- https://www.patreon.com/tagasaurus
- https://www.facebook.com/TagasaurusApp/