Facebook’s Artificial Intelligence Has The Ability to Search Photos by Content

The term artificial intelligence was coined 60 years ago, but it is only now starting to deliver. Lumos, Facebook’s computer vision platform, was initially used to improve the experience for visually impaired members of the Facebook community. Lumos now powers image content search for all users. What does this mean for you? You can now search for images on Facebook with keywords that describe the contents of a photo, rather than being limited to tags and captions.

How does this work? It starts with the huge task of computational training. For the object recognition used in Facebook’s image search, the artificial intelligence (AI) system started with a relatively small set of 130,000 public photos shared on Facebook. Using the annotated photos, the system could learn which pixel patterns correspond to particular subjects. It then went on to train on the tens of millions of photos on Facebook. In other words, the caption-reading technology trained a deep neural network on public photos shared on Facebook. The model essentially matches search descriptors to features pulled from photos with some degree of probability. You can now search for photos based on Facebook AI’s assessment of their content, not just on how humans happened to describe the photos with text when they posted them.
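To make the idea of "matching search descriptors to features with some degree of probability" concrete, here is a minimal sketch. It assumes a vision model has already turned each photo into a feature vector and the search term into a vector in the same space (the vectors, photo IDs, and three-dimensional features below are purely illustrative, not Facebook’s actual system); ranking then reduces to a similarity score.

```python
import math

# Hypothetical image feature vectors produced by a vision model,
# keyed by photo ID. Real embeddings are high-dimensional; three
# dimensions are used here purely for illustration.
photo_features = {
    "photo_1": [0.9, 0.1, 0.0],  # e.g. strongly "dog-like"
    "photo_2": [0.1, 0.8, 0.2],  # e.g. strongly "beach-like"
    "photo_3": [0.7, 0.3, 0.1],
}

# Hypothetical embedding for the search query, e.g. "dog".
query_vector = [1.0, 0.0, 0.0]

def cosine_similarity(a, b):
    """Score how closely two feature vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Rank photos by how well their features match the query.
ranked = sorted(
    photo_features,
    key=lambda pid: cosine_similarity(query_vector, photo_features[pid]),
    reverse=True,
)
# ranked[0] is the photo whose content best matches the query
```

The key point is that no human-written caption is consulted: the ranking comes entirely from features the model extracted from the pixels.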

How could this be used? Say you spotted a dress you really liked in a video. Using this search, the dress could be related back to a listing on Marketplace, or even connect you directly with an ad partner, improving the customer experience while supporting revenue growth. So it can help customers and the customer experience, as well as companies selling things and ad partners.

What else is new? Facebook released the text-to-speech tool last April so that visually impaired users could use it to understand the contents of photos. At that point, the system could tell you that a photo involved a stage and lights, but it wasn’t very good at relating actions to objects. Now the Facebook team has improved that by painstakingly labeling 130,000 photos pulled from the platform. Facebook trained a computer vision model to identify 12 actions happening in the photos. So, for instance, instead of just hearing that a photo showed “a stage,” a blind user would hear “people playing instruments,” “people dancing on a stage,” “people walking,” or “people riding horses.” This provides contextual relevancy that was not possible before.
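A rough sketch of how a set of predicted action labels might become a spoken caption: a model scores each trained action for a photo, and the confident ones are stitched into a sentence. The labels, scores, threshold, and phrasing below are illustrative assumptions, not Facebook’s actual output.

```python
# Hypothetical output of an action-recognition model: each trained
# action label with a predicted probability for one photo.
action_scores = {
    "playing instruments": 0.91,
    "dancing on a stage": 0.83,
    "walking": 0.12,
    "riding horses": 0.02,
}

def describe_photo(scores, threshold=0.5):
    """Turn confident action predictions into a spoken-style caption."""
    confident = [
        label
        for label, p in sorted(scores.items(), key=lambda kv: -kv[1])
        if p >= threshold
    ]
    if not confident:
        return "Image may contain: a scene"
    return "Image may contain: people " + " and ".join(confident)

caption = describe_photo(action_scores)
# Only the high-confidence actions make it into the caption,
# so a listener hears what is happening, not just what is present.
```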

You could imagine one day being able to upload a photo of your morning bagel and this technology could identify the nutritional value of that bagel because we were able to detect, segment, and identify what was in the picture.

So it seems the race is on for services covering not just image recognition, but speech recognition, machine-driven translation, natural language understanding, and more. What’s your favorite AI vendor?

@Drnatalie, VP, Program Executive, Salesforce ITC


Amazon Go – A Retailer Using AI, ML and Vision Technology

The idea of Amazon Go is to weave the capabilities of deep learning algorithms, artificial intelligence (AI), machine learning (ML), and sensor vision into the shopping experience. Amazon Go is a practical application of AI and ML in an advanced shopping experience: the ability to walk into a store, grab what you want, and walk out without ever waiting in line: no checkout lanes, no registers. For many customers, especially after work when they are tired and just want to get home, or during the holidays, this could be a much better customer experience.

So how does this work? A customer opens the Amazon Go app on their smartphone and scans their personalized bar code as they walk into the store. The phone goes back into a pocket or purse, and the customer begins shopping. As the customer picks up a product, it’s added to their total. If a customer decides they don’t want an item, placing it back on the shelf removes it from their total. Amazon Go calls it “just walk out technology” for the modern shopper. Once the customer is done shopping and leaves the store, the total is calculated and charged to the customer’s amazon.com card.
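The pick-up/put-back flow above can be sketched as a virtual cart that mirrors what the store’s sensors see on the shelves. The class and method names here are illustrative assumptions, not Amazon’s actual system.

```python
# A minimal sketch of the "just walk out" flow: sensors detect items
# leaving or returning to the shelf and keep a virtual cart in sync.

class VirtualCart:
    def __init__(self):
        self.items = {}  # product name -> (quantity, unit price)

    def pick_up(self, product, price):
        """Sensors see a product leave the shelf: add it to the cart."""
        qty, _ = self.items.get(product, (0, price))
        self.items[product] = (qty + 1, price)

    def put_back(self, product):
        """Sensors see the product returned: remove it from the cart."""
        if product in self.items:
            qty, price = self.items[product]
            if qty <= 1:
                del self.items[product]
            else:
                self.items[product] = (qty - 1, price)

    def checkout(self):
        """On walking out, total the cart for charging the account."""
        return sum(qty * price for qty, price in self.items.values())

cart = VirtualCart()
cart.pick_up("sandwich", 4.99)
cart.pick_up("juice", 2.50)
cart.put_back("juice")       # the customer changed their mind
total = cart.checkout()      # only the sandwich is charged
```

The design choice worth noting is that the cart is event-driven: there is no explicit "scan at checkout" step, so the charge on exit is only as accurate as the sensor events that updated the cart.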

From the customer’s point of view, while online shopping has increased, some customers still like the idea of going to a store and touching and seeing the merchandise. Showrooming is when customers go to a store to look at merchandise, search on their phones for a better online deal (from that retailer or others), and buy it online while standing in the store. To help ensure that brick-and-mortar stores don’t turn into showrooms, technologies like Amazon Go provide convenience. Perhaps the thought, and the hope, is that convenience will be more important than searching for a cheaper price and buying it online.

Showrooming can be very frustrating for brick-and-mortar stores and has put some of them out of business. It’s interesting that the online and offline shopping worlds are colliding. Fresh goods have a short shelf life and are often thought of as poor candidates for online shopping because of their perishable nature. However, it’s a high-margin area that Amazon wants to tackle by combining brick-and-mortar stores with the convenience of shop-and-go. Younger generations don’t have the tolerance for standing in line.

The future of shopping is just getting more and more interesting as the new technologies get implemented.

@DrNatalie Petouhoff, VP and Principal Analyst, www.Constellationr.com

Covering Customer-Facing Applications
