HOW DEEPLY CAN IT KNOW ME?
Artificial Intelligence (AI) as we know it today has made great strides in helping Netflix and Spotify learn about their users' preferences. It can recommend movies and music by analyzing what you have watched, listened to, or even browsed. To date, collaborative filtering has worked wonders.
What is it, you may ask?
Collaborative filtering, also referred to as social filtering, filters information by using the recommendations of other people. It is based on the idea that people who agreed in their evaluation of certain items in the past are likely to agree again in the future. Let’s say Alice and Bob have similar interests in video games. Alice recently played and enjoyed the game X. Bob has not played this game, but because the system has learned that Alice and Bob have similar tastes, it recommends this game to Bob. In addition to user similarity, recommendation systems can also perform collaborative filtering using item similarity (“Users who liked this item also liked X”).
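The Alice-and-Bob idea above can be sketched in a few lines of code. This is a minimal illustration, not any real system's implementation: the ratings, user names, and the "most similar neighbour" shortcut are all assumptions made for the example, with similarity measured as cosine similarity over commonly rated items.

```python
import math

# Toy user-item rating matrix (assumed data for illustration);
# each user maps the games they rated to a 1-5 score.
ratings = {
    "Alice": {"GameA": 5, "GameB": 4, "GameX": 5},
    "Bob":   {"GameA": 5, "GameB": 4},
    "Carol": {"GameA": 1, "GameB": 2, "GameX": 1},
}

def cosine_similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    """Suggest items the most similar user liked but the target has not rated."""
    others = {u: r for u, r in ratings.items() if u != target}
    neighbour = max(others, key=lambda u: cosine_similarity(ratings[target], ratings[u]))
    seen = set(ratings[target])
    return [item for item, score in ratings[neighbour].items()
            if item not in seen and score >= 4]

print(recommend("Bob", ratings))  # Alice rates most like Bob, so GameX is suggested
```

Item-based filtering ("users who liked this item also liked X") works the same way, except the similarity is computed between columns (items) of the matrix instead of rows (users).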
The problem with all this is that the recommendation is typically built for a set of users, not for a particular user.
A particular user might like a movie of a certain genre for its cinematography, its story, its setting, or its dialogue. But there might also be a movie that matches the user's taste in one or more of these "like" factors, yet belongs to another genre, say sci-fi.
Movix.ai leverages a long short-term memory (LSTM) neural network built with the TensorFlow framework. Its recommendation engine is interactive, using tags along with the movie preferences that users choose in real time. Since it does not require registration, user history is not mined; instead, the user's taste or mood is considered in real time. This gives a recommended movie list more relevant to the user's current situation: the occasion, the mood, or any other factor the user implies when choosing movies.
Spotify has acquired Niland, whose powerful deep learning algorithms analyze musical tracks and classify them under a number of labels, which then produce search result sets of various types. The API can be integrated into any music platform, which in turn can curate track content, add descriptive tags, and create new experiences by generating playlists based on what the user is listening to on the go.
The recommendation engines of Netflix, Amazon, and others have now moved to neural networks, to start thinking more like the human brain and recommend the music and movies we are actually searching for.
They are supposed to understand, for instance, that a user who likes "House of Cards" may not like "The West Wing" or "Scandal". Or simply: if I love "Friends", I might not even like the concept of "How I Met Your Mother".
Pandora, on the other hand, has taken a different route to a quality recommendation engine. Powered by the Music Genome Project, it employs musicologists to break music down into distinct characteristics, or "genes", and tag each song with the genes it matches. The engine uses filtering techniques to match the genes of the user's current track against that gene set, including characteristics we can't even fathom to be the most relevant factors, to produce a recommended playlist.
The coolest feature of these recommendation engines would be understanding a user's preference for a singer's tonal quality or softness, for the rhythm or the musical instruments they enjoy, or for watching a certain emotional scene in a particular context, enacted wonderfully by the actors.
This will be a reality soon, with all the deep learning that is taking place. After all, AI is now creating trailers and writing quirky scripts, and AI agents have even evaded humans by talking among themselves in languages of their own creation.