How can I use AI search?
AI search is located in the top, file-level search bar. Just flip the switch in the search field and type in any word or phrase that describes the visual components of the image you're looking for. You no longer need to know or understand the file taxonomy to find images; simply search for whatever you like! Examples of search terms include:
Glass building by water at sunset
Stairs (or stairway, or staircase…)
Bridge over land
High rise at night
Team in the office
Lobby with plants
How do you recommend using AI Visual Search?
OpenAsset's AI Visual Search is built on an AI language model that lets you search your OpenAsset instance for files matching the description you type into the search field. Candidly, the best way to understand what will be returned is to try it out. Attempt different searches in OpenAsset, and if the results aren't quite what you expect, adjust the language you're using and try again.
If you see an image (or two) in the results that matches exactly what you're looking for, you can use the AI Image Similarity feature to see more images that contain the visual components recognized in that file.
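OpenAsset doesn't publish how Image Similarity works internally, but the general technique behind features like it, ranking stored embedding vectors by cosine similarity to a seed image's vector, can be sketched in a few lines. Everything below (the vectors, file names, and helper function) is hypothetical illustration, not OpenAsset's actual implementation:

```python
import numpy as np

# Hypothetical: each file in the library has already been reduced to an
# embedding vector by a vision model. Visually similar images end up with
# vectors pointing in similar directions.
library = {
    "lobby_01.jpg":  np.array([0.91, 0.10, 0.02]),
    "lobby_02.jpg":  np.array([0.88, 0.15, 0.05]),
    "bridge_17.jpg": np.array([0.05, 0.93, 0.30]),
}

def most_similar(seed_name, top_n=2):
    """Rank every other file by cosine similarity to the seed image's vector."""
    seed = library[seed_name]
    scores = {
        name: float(np.dot(seed, vec) / (np.linalg.norm(seed) * np.linalg.norm(vec)))
        for name, vec in library.items()
        if name != seed_name
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(most_similar("lobby_01.jpg"))  # lobby_02.jpg should rank first
```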
Can I search for both visual elements and Project information?
Yes! You will need to layer these searches, though, as the AI only knows what is visually in an image. Perform your visual AI search first, then add further searches on top. For instance, search "lobby with plants" as your AI search, then layer the keywords "Chicago" and "healthcare" alongside the field "client X" to find exactly what you're looking for.
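To make the layering concrete, here is a minimal sketch of the idea in Python. The record structure and field names are invented for illustration; in practice you simply apply the keyword and field filters in the OpenAsset UI after running the AI search:

```python
# Hypothetical records: the AI search has already returned visual matches
# for "lobby with plants"; Project metadata is then filtered on top.
ai_results = [
    {"file": "img_101.jpg", "keywords": {"Chicago", "healthcare"}, "client": "Client X"},
    {"file": "img_207.jpg", "keywords": {"Denver", "education"},   "client": "Client Y"},
]

# Layer the Project-level filters on top of the visual results.
wanted_keywords = {"Chicago", "healthcare"}
filtered = [
    r for r in ai_results
    if wanted_keywords <= r["keywords"] and r["client"] == "Client X"
]
print([r["file"] for r in filtered])  # ['img_101.jpg']
```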
Can I see or manage the tags that the AI is applying behind the scenes?
No. The AI is not actually applying tags; it uses vectors to index the images. This process is similar to what Google Images does, and, as with Google Image search, tags are not available. We don't even have them readily available at OpenAsset! It's all handled by the AI.
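As a rough mental model (again, not OpenAsset's actual code), vector indexing works like this: a model maps both images and search text into the same vector space, and a search simply finds the image vectors closest to the text's vector. No keyword list ever exists to inspect, only floats:

```python
import numpy as np

# Hypothetical index: at upload time the model turns each image into a
# vector of floats. There is no human-readable tag list to inspect.
image_index = {
    "stairs_04.jpg": np.array([0.20, 0.90, 0.10]),
    "lobby_11.jpg":  np.array([0.80, 0.10, 0.30]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# At search time, the query text is embedded into the same space.
# This vector stands in for embed("lobby with plants").
query_vector = np.array([0.75, 0.20, 0.25])

ranked = sorted(image_index, key=lambda name: -cosine(query_vector, image_index[name]))
print(ranked)  # lobby_11.jpg ranks ahead of stairs_04.jpg
```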
Is the feature secure?
Yes, the AI model used is hosted within OpenAsset and within the AWS boundary. The data (vectors) that the model generates are stored within your OpenAsset data. This is your data. It is not shared with anyone and is not used to train the AI.
I'm a new client without a file-level taxonomy, or I'm an existing client with a keyword taxonomy. Should I create a new taxonomy, or continue to maintain the existing one?
As a starting point - no.
While our Data Migration service does bring in file-level keywords from the folder structure that existed prior to OpenAsset, we recommend that the first step in enabling your users to search for content within OpenAsset should be AI Visual Search. AI Visual Search is built into the product and enables users to search without pre-existing knowledge of a customized taxonomy.
We do recommend that, as Admins or Core Users of OpenAsset, you perform AI searches for some of the things you expect to be regular searches by your team, in order to understand the type of results that are returned. As you identify key searchable terms, note them and share them internally to give other users a sense of the best approach to searching.
You may still want to create a file-level taxonomy for visual elements that the AI is not specific enough to capture. Examples may include very specific architectural terms or the names of construction equipment. Before deciding to create a file-based taxonomy, experiment with visual search to see whether it's giving you the results you need.
I’m an existing client with a large file-level taxonomy. What are best practices to make my taxonomy work well with AI?
You’ll still have the option to search using only your taxonomy.
We don't believe you need to alter your taxonomy. In fact, if users who are in OpenAsset regularly have grown accustomed to those search terms, taking them away would be disruptive.
However, we do want to note a difference for terms like "interior/exterior", "photo/rendering", "headshot/group shot", etc. that used to be managed as keywords: keyword searches return only the images that have those keywords applied, so you may receive more results for the same terms when using AI Visual Search. We recommend searching both ways, striking the right balance in your workflows, and advising your more casual users based on what you find.
Do I still need a Project taxonomy and Project fields?
Absolutely. The AI won't know the location, year completed, client name, sector, services completed, or any other Project field or keyword. It can only see what is visually represented in an image. Your Project taxonomy is still key to providing AEC-specific search.
Do I need to tag files at upload?
Not unless you want to. Remember that all files within a Project are tagged with that Project's keywords automatically. You may want to add file-level keywords for concepts the AI is too general to capture, for instance, specific architectural terms. But most of the terms your users search for will be covered by the AI.
Can I turn off AI search?
AI Visual Search is on for everyone by default, but it can be controlled by a general permission at the company or group level.
Is the AI being trained through our use of it? Or is it learning from my images?
The AI that powers Visual Search is a pre-trained model. It is not learning from client data or trained on client data in ANY way.
It would be helpful to understand the nature of the question, though. In general, we find this type of question comes up in regard to concerns around security, or when end users aren't finding what they're looking for. Is one of these more relevant to your inquiry?
From a security standpoint: the AI infrastructure is hosted within OpenAsset and within the AWS boundary. The data (vectors) that the model generates are stored within your OpenAsset data. This is your data. It is not shared with anyone and is not used to train the AI.
Regarding the ability of the AI to learn and improve: AI Visual Search is built on a model trained on 400 million images, so its ability to provide search results is very robust. It returns results based on the AI's understanding of natural language and of how search terms overlap with similar terms, as they would in regular conversation. If the results being returned don't match what you expect, the recommendation is to keep tweaking the language you're searching with. Adding more specific phrasing, or swapping in more generic terms, is a good starting point.
If you try multiple searches and the results still don't match what you're looking for, we would like to better understand the search terms you're using and where the gaps are. Our team is interested in speaking with clients in these cases.
Does the AI search work off of my taxonomy or keywords?
No, the AI search does not work off your own taxonomy. This means that many, many more search terms are effectively available for your images, without you needing to manage them in your taxonomy or tag files yourself.
There are actually no keywords - truly. Clients have asked to see the 'behind the scenes' keyword list; this is not possible, because no such list exists.
A good point of comparison: Google Image search doesn't operate on keywords either.
How can we know what will be searchable by the AI?
The model has been trained on 400 million images, so it introduces a broad scope of searchability to your OpenAsset system that goes well beyond what you may have thought to create as manually managed keywords.
AEC-relevant terms are natural language terms, and we recommend searching within your OpenAsset instance to vet the quality of the results being returned.
As you vet the results, make note of what is working well and what isn’t and share that with your team as guidance.
I'm hesitant to launch OpenAsset to a broader group of users because of the lack of searchability, since we haven't cleaned up or applied file-level keywords yet.
We understand the hesitancy if you feel like there is still work to be done with keywording files to ensure that team members will be able to find what they're looking for within OpenAsset.
To that end, as you look to ensure your OpenAsset content is searchable, we are excited to share our AI Visual Search feature, which enables users to search OpenAsset with natural language to surface images that meet a range of criteria. From aesthetic descriptions like "construction site at sunset" to more technical terms such as "girder", "cantilever", and "precast concrete", AI Visual Search will help all users narrow the images in your OpenAsset library down to the ones most relevant to their use case - going from thousands or tens of thousands of images to hundreds that you can quickly scroll through to find what you need.
EPS files are clogging the results. Why is this and what can we do about it?
In most cases, these PNG/EPS files contain transparency, which makes it difficult for the AI to analyze and categorize the content of the file. For example, when searching for "open space", the AI may interpret the transparency itself as "open space" and include PNG/EPS files whose real content only appears when used within other documents or interfaces. Searching for "open space outside buildings" instead will return better results. Adding more context about what you are looking for will lead to better results.
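If you want to identify which PNG files in a folder carry transparency (and are therefore likely to behave this way in visual search), a quick check with the Pillow library looks like the sketch below. The folder path is a placeholder, and EPS files would need rasterizing first, so this covers PNGs only:

```python
from pathlib import Path
from PIL import Image  # pip install pillow

def has_transparency(path):
    """True if the image has an alpha channel with any non-opaque pixel."""
    img = Image.open(path).convert("RGBA")
    alpha_min, _ = img.getchannel("A").getextrema()
    return alpha_min < 255  # at least one pixel is less than fully opaque

folder = Path("./library")  # placeholder path; point this at your exports
for png in folder.glob("*.png"):
    if has_transparency(png):
        print(f"{png.name}: contains transparency")
```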