Visual Search for Mobile Commerce in Action: Shnap

Experimenting with a new mobile visual search tool that leverages AI to build a next-gen shopping experience

AI is changing the way we shop, and just in time: the need for intuitive, intelligent mobile commerce solutions has never been clearer. In the U.S., retail sales were down 8.1% year-over-year as of June 2020, which is to be expected as consumers stay home and save during deeply uncertain economic times (to say the least).

While a brick-and-mortar rebound will be necessary for the industry to fully recover, there are other ways retailers can create immersive shopping experiences without taking on the risks of re-opening or waiting for in-store customer volumes to return to normal.

One of those ways is by implementing visual search.

Let me define visual search with a quick example. I have this shirt that I really like…

But I can’t for the life of me remember where I got it, and the tag with the manufacturer/brand got torn off. I want another one like it (it doesn’t have to be an exact copy; I just like the style), but how would I find it? I could try a Google search for something like “dark blue shirt with white sprinkle pattern.”

Not quite. I could try some other word combinations, but at that point I should probably just start clicking through results on a few of my preferred clothing retailers’ sites. Doable, but not the most pleasant shopping experience, especially since all that wandering won’t include being able to try anything on. Double whammy.

This, in a nutshell, is the problem visual search intends to solve. Not the torn tag or my faulty memory, necessarily, but the problem of a user knowing what something (i.e., a product) looks like and having no reliable way to find it in the world.

Visual search, when it comes to e-commerce, works via a computer vision-based machine learning model that encodes an input image (i.e., the search query) into a representation of its visual patterns, then returns the items from a given product catalog whose representations are most visually similar.
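To make that concrete, here’s a minimal sketch of how embedding-based visual search typically works, assuming a pretrained ResNet backbone as the feature extractor. Shnap hasn’t published details about its model or catalog pipeline, and the catalog paths and file names below are hypothetical; the pattern, though, is the standard one: embed every image, then rank the catalog by similarity to the query.

```python
# Minimal embedding-based visual search sketch (not Shnap's actual pipeline).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained CNN with the classification head removed, so its output
# is a feature vector ("embedding") rather than class scores.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Map an image file to a unit-length feature vector."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features = backbone(image).squeeze(0)
    return features / features.norm()

# Hypothetical product catalog: product IDs mapped to image files.
catalog = {
    "shirt_001": "catalog/shirt_001.jpg",
    "shirt_002": "catalog/shirt_002.jpg",
}
catalog_embeddings = {pid: embed(path) for pid, path in catalog.items()}

def search(query_path: str, top_k: int = 5):
    """Rank catalog items by cosine similarity to the query image."""
    query = embed(query_path)
    scores = {pid: float(query @ emb) for pid, emb in catalog_embeddings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(search("my_shirt.jpg"))
```

In a production system, the catalog embeddings would be precomputed and the ranking handled by an approximate nearest-neighbor index rather than a Python dictionary, but the underlying idea is the same.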

This kind of technology used to be limited to desktop and cloud-based applications. But as smartphone chipsets become more powerful and more dedicated to AI tasks, it’s now possible to create these kinds of visual search experiences directly inside mobile apps.
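On iOS, putting a model like the one sketched above inside an app usually means converting it into a format the device’s ML runtime can execute. Whether Shnap actually runs inference on-device or server-side isn’t public, but here’s one common route, converting a PyTorch model to Core ML with Apple’s coremltools (the model and file names are hypothetical):

```python
# A sketch of preparing the embedding model for on-device use via Core ML.
# Assumes the same torchvision backbone as the earlier example.
import torch
import torchvision.models as models
import coremltools as ct

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # emit feature vectors, not class scores
backbone.eval()

# Core ML conversion works on a traced (TorchScript) version of the model.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(backbone, example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.ImageType(name="image", shape=example_input.shape)],
    convert_to="mlprogram",
)
mlmodel.save("ShirtEmbedder.mlpackage")  # hypothetical name; add to Xcode
```

From there, an app could run the embedding step locally and send only the compact feature vector to a catalog search service, which is one common way to keep the experience fast and responsive.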

And today, I want to introduce you to a new mobile visual search tool that gives us some insight into the future of mobile commerce—or at least a core part of that future.

Shnap: Visual Search for Mobile and the Browser

The above image pretty much sums up the core functionality of Shnap. You can’t find all the stores selling every item, but you can navigate a pretty robust selection from the catalogues of a solid group of partner retailers.

The most impressive thing to me about Shnap is its incredibly simple interface and user experience: Take any image — either captured or from your camera roll — and instantly access a listing of visually similar products (clothing, apparel, and accessories) from a variety of retailers and marketplaces (e.g., Poshmark).

Additionally, users can choose to share their photos with the Shnap community, allowing other users to scroll through an “Explore” interface that catalogues all other “Shnaps” users have uploaded.

I’d imagine, given the opt-in required of users, that this Explore feature doubles as a way for Shnap to collect real-world data and improve its AI model(s) over time. But it’s also a user experience with quite a bit of potential: right now it’s a bit narrow in its functionality (no filtering, no sorting, etc.), but I’d expect future versions of the app to improve this feature, perhaps even tailoring Explore results to individual users.

Putting Shnap to the Test

Shall we?

Here’s a look at the results when I used the above picture of my favorite (anonymous) shirt as my visual search input in Shnap. Keep in mind the images we got earlier with the text-based search, and note that this is all happening within an iOS app (on an iPhone SE):

I chose to upload a photo from my camera roll, but as mentioned above, Shnap also lets you capture a photo with your device camera (currently, only the back-facing camera seems to be supported).

But as you can see, the visual search results are much better than those from the approximated text-based Google search. While I’m not personally prepared to buy a ~$1,300 Dolce & Gabbana shirt, there’s enough range in brands and prices to appeal to a variety of potential customers.

The core technology of visual search is already here. The computer vision models that make it possible are powerful enough to identify and process discrete, intricate visual patterns, and they’re small enough to fit inside mobile applications.

What we should see in the coming revolution of AI-powered mobile visual search, in addition to more powerful and accurate models, are innovative user experiences that layer even simpler and more elegant interfaces on top of the AI.

In Shnap’s case, it’ll be interesting to see if the team can craft a seamless experience around its impressive visual search component: a front-facing camera, a more robust and personalized community experience, and maybe even something wacky like a way to create your own style and see what’s out there.

Regardless, with the ways in which our economies and consumer habits are rapidly changing, we can expect Shnap and other mobile-based visual search experiences to transform what’s possible with mobile commerce in the coming months and years.

