We’ve all experienced this situation: we have a certain image in mind, can only vaguely recall what we’ve seen, struggle to describe it in our own words, and don’t know its name. In theory, we would open a search engine and try to piece together individual keywords to get the most accurate results possible. This takes time and often ends in frustration, as the results rarely meet our expectations despite long trial and error.
With our invention, VisualSearch, we aimed to showcase what visual search could look like in the future. By introducing a new type of interaction and utilizing artificial intelligence in the background, our goal was to provide users with more accurate search results and the lowest possible error rate. Through extensive user testing, we developed a concept that appeals to a broad target audience.
Unlike conventional search engines, which rely heavily on the accuracy, completeness, and consistency of IDs, metadata, and descriptive tags, VisualSearch’s artificial intelligence does not depend on such pre-annotated data. To achieve precise search results, the user receives real-time feedback from the AI while entering their input. This feedback comes in the form of suggestions that let the AI better understand the user’s query. By filling in the missing information this way, VisualSearch can deliver much faster and more accurate results than conventional search engines.
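To make this loop concrete, here is a minimal sketch of how such a suggestion cycle might look in a web client. Everything in it is hypothetical: the `/api/suggest` endpoint, the `Suggestion` type, and the refinement strategy are illustrative assumptions, not part of the actual concept.

```typescript
// Hypothetical sketch of the real-time feedback loop described above;
// the endpoint and types are illustrative, not part of VisualSearch.

interface Suggestion {
  label: string;      // a clarifying interpretation, e.g. "wrought-iron gate"
  confidence: number; // the model's confidence in this reading, 0..1
}

// Ask the model for suggestions while the user is still typing or drawing.
async function suggest(partialInput: string): Promise<Suggestion[]> {
  const response = await fetch("/api/suggest", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: partialInput }),
  });
  return response.json();
}

// Accepted suggestions are folded back into the query, supplying the
// missing information before the final search runs.
function refine(query: string, accepted: Suggestion): string {
  return `${query} ${accepted.label}`.trim();
}
```

Each accepted suggestion narrows the query before the final search is executed, which is where the speed and accuracy gains over a single one-shot keyword query would come from.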
VisualSearch is an independent search engine integrated into every system and application, making it accessible to users at all times.
The interface consists of a drawing field and an additional text field. Optionally, you can also attach a photograph. It is not necessary to fill out all fields; the three search options can be used individually or in any combination. The main focus of the application is the drawing interaction, which lets you visually express your thoughts using a color palette. A tagging tool also allows you to label your drawing.
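Conceptually, a query is then any combination of the three optional inputs plus tags. The following is a minimal sketch of that shape, assuming a browser context; the type and field names are hypothetical.

```typescript
// Hypothetical shape of a VisualSearch query: every input is optional,
// and the three search options can be combined freely.

interface VisualQuery {
  drawing?: Blob;   // content of the drawing field (the main interaction)
  text?: string;    // free-text description
  photo?: Blob;     // optionally attached photograph
  tags: string[];   // labels applied to the drawing with the tagging tool
}

// A query is valid as soon as at least one of the three inputs is present.
function isValid(query: VisualQuery): boolean {
  return Boolean(query.drawing || query.text || query.photo);
}

// Example: a drawing combined with a short textual hint.
const example: VisualQuery = {
  drawing: new Blob(),          // stands in for canvas.toBlob() output
  text: "ornate metal gate",
  tags: ["architecture"],
};
console.assert(isValid(example));
```

Keeping every field optional mirrors the interaction described above: the user is never forced through a fixed form, and the AI works with whatever combination of inputs it receives.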
A new file format displays results as 3D objects, allowing the user to view a realistic model with accurate proportions. This enhances the search by drawing, as the object can be examined from all perspectives.
This project was carried out in collaboration with Lucie Wittmer and Lydia Frei at the Hochschule für Gestaltung Schwäbisch Gmünd. The aim of the course “Invention Design,” taught by Prof. David Oswald and Ulrich Barnhöfer, was to research future-oriented technologies and design interactions that could emerge as these technologies mature.