Let me ask you a question: how many times has it occurred to you that when you are researching a topic, for personal or professional use, your eyes are drawn towards the visual side of the content? In its simplest form, it starts with a phrase-based Google search. When the results appear, you choose among the top three: a video thumbnail is best, an image is next best, and a plain text result comes last. And if you do open a text-based result like a post, you give it a quick scan, looking for any visual media that eases the process.

Even in passive research mode, your brain is playing with ideas, and you visit social media or YouTube to pick up random ideas related to your thought. Here the magic of machine learning recommendations drives you deeper like never before, and Eureka! An idea sparks.

It's true that our brains love visual content and can memorise it faster than a text-based read. Your eyes want to jump straight to the juice where the actual answers lie and skip the rest.

In the past, people used to be good readers (some of them still are), more exposed to newspapers, journals, periodicals, and thick chunks of insight where you really needed to put in effort to find an answer.

That can now be done without touching anything (just say "Hey Siri!" or "OK Google!"). Millennials have been exposed to machine-learning-generated feeds that serve up visual content matching your interests without your ever telling them. Hundreds of data points (listeners) observe your next move and recommend content accordingly.

Hence, research consumption for millennials needs to be redesigned.

When I am doing a particular piece of research, my eyes are continuously looking for comparison tables, graphs, pie charts, infographics, videos, and similar pictorial content that could tell me at a glance what I want to know. That's hardly ever possible; to get those answers, I have to move back and forth between pages and watch some videos, and then maybe I have some ideas.

Imagine that process simplified. You begin by asking your question through a voice-based search assistant in your own language. A natural-language processor rephrases it, finding various other ways to ask the same question, and then an elastic deep search finds related content that the system has self-organised, tagged, and categorised. The results are arranged according to your taste and interests, built up by a recommendation system, and finally all the result pieces are put together just for you, laid out the way you like, everything in front of you, like in the Iron Man movies.
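To make the idea concrete, here is a minimal sketch of that pipeline. Everything in it is a hypothetical stand-in: `paraphrase`, `search_index`, and `user_affinity` are not real APIs, and the demo index is made up for illustration.

```python
# Hypothetical sketch of the visual research pipeline described above.
# paraphrase(), search_index(), and user_affinity() are stand-ins, not real APIs.

from dataclasses import dataclass

@dataclass
class ResultPiece:
    content_type: str   # "video", "chart", "infographic", ...
    topic: str
    relevance: float

def paraphrase(query: str) -> list[str]:
    """Stand-in for the NLP step that rewrites the question several ways."""
    return [query, f"what is {query}", f"{query} explained"]

def search_index(query: str) -> list[ResultPiece]:
    """Stand-in for an elastic deep search over tagged, categorised content."""
    demo_index = [
        ResultPiece("chart", "4G adoption rate", 0.9),
        ResultPiece("video", "4G adoption rate", 0.7),
        ResultPiece("table", "5G rollout", 0.4),
    ]
    return [r for r in demo_index if any(w in r.topic for w in query.split())]

def user_affinity(piece: ResultPiece, profile: dict[str, float]) -> float:
    """Stand-in for a recommender scoring by the user's content-type taste."""
    return profile.get(piece.content_type, 0.1)

def research(query: str, profile: dict[str, float]) -> list[ResultPiece]:
    seen, results = set(), []
    for q in paraphrase(query):                    # 1. rephrase the question
        for piece in search_index(q):              # 2. search organised content
            key = (piece.content_type, piece.topic)
            if key not in seen:
                seen.add(key)
                results.append(piece)
    # 3. personalise: blend raw relevance with the user's taste profile
    return sorted(results,
                  key=lambda p: p.relevance * user_affinity(p, profile),
                  reverse=True)

if __name__ == "__main__":
    profile = {"video": 0.9, "chart": 0.6}         # learned from past behaviour
    for piece in research("4G adoption", profile):
        print(piece)
```

With this taste profile, videos float above charts of the same topic even when their raw relevance is a little lower, which is exactly the "arranged according to your taste" step described above.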

[Figure: example of a result screen]

[Figure: example of a search result screen]

If something isn't arranged to your liking, you can reorganise the visual, web-based result yourself, and the machine learning layer learns from what it got wrong and organises things better next time. A rough sketch of that correction loop follows.
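One simple way such a loop could work is sketched below, assuming a hypothetical per-user weight table like the `profile` above. The update rule is only an illustration, not a claim about how any particular system learns.

```python
# Hypothetical sketch: learn from the user reordering their results.
# When the user drags one piece above another, nudge the weight of the
# preferred content type up and the displaced one down.

LEARNING_RATE = 0.05

def apply_reorder_feedback(profile: dict[str, float],
                           promoted_type: str,
                           demoted_type: str) -> None:
    """Update the taste profile after the user moves one result above another."""
    profile[promoted_type] = profile.get(promoted_type, 0.5) + LEARNING_RATE
    profile[demoted_type] = max(0.0, profile.get(demoted_type, 0.5) - LEARNING_RATE)

profile = {"video": 0.9, "chart": 0.6}
# The user dragged a chart above a video on the result screen:
apply_reorder_feedback(profile, promoted_type="chart", demoted_type="video")
print(profile)   # charts now rank a little higher next time
```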

The content types are also visual, including but not limited to:

  1. Pictures
  2. Videos
  3. Interactive graphs and statistical data
  4. Tables and charts
  5. Slides/stories
  6. Podcasts
  7. Web streams
  8. News snippets
  9. Global live trends

Now that you can see everything on a single site, you can relate, compare, drop or choose opinions, and put selected pieces into your collection for your next presentation or report, and boom: the thing that took hours of web scouring, note-making, downloading, and trying to remember results is at your fingertips in a matter of minutes.

On the other hand, the system also identifies questions and keywords that have little content available, or that haven't been addressed yet, and informs content creators so that new material gets created and organised automatically. This is easier for the research creator too, because they no longer need to worry about the whole package of a long report or presentation, just an individual piece of dynamic content tied to a properly defined keyword string.

Example phrase: “4G adoption rate”. A sketch of that gap detection follows.
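Here is a rough sketch of how the gap detection might look, assuming a hypothetical query log and a tagged content index. The counts and the coverage threshold are made up for illustration.

```python
# Hypothetical sketch: flag keywords that users ask about often
# but for which the index holds little or no tagged content.

from collections import Counter

query_log = ["4G adoption rate", "4G adoption rate", "5G rollout map",
             "4G adoption rate", "fibre coverage india"]
content_index = {"5G rollout map": 12, "fibre coverage india": 3}  # pieces per keyword

MIN_PIECES = 5  # arbitrary coverage threshold for this illustration

def find_content_gaps(queries: list[str], index: dict[str, int]) -> list[str]:
    """Return keywords with demand but fewer than MIN_PIECES content pieces."""
    demand = Counter(queries)
    return [kw for kw, hits in demand.most_common()
            if index.get(kw, 0) < MIN_PIECES]

# These keywords would be pushed to content creators as briefs:
print(find_content_gaps(query_log, content_index))
# -> ['4G adoption rate', 'fibre coverage india']
```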

As the system becomes smarter, it no longer needs a human hand; it creates content on its own using the vast data available outside its knowledge base (the internet). Various other sources of data can be pipelined directly into its knowledge base, where the system repurposes, redesigns, and organises them for the next use.
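Sketched loosely, such a self-feeding pipeline might look like this. Every source name and function here is a hypothetical stand-in for a real fetcher, tagger, and store.

```python
# Hypothetical sketch of piping outside data sources into the knowledge base,
# where each item is repurposed (normalised, tagged) before storage.

knowledge_base: list[dict] = []

def fetch(source: str) -> list[dict]:
    """Stand-in for pulling raw items from an external feed or dataset."""
    return [{"source": source, "text": "Q3 telecom subscriber figures"}]

def repurpose(item: dict) -> dict:
    """Normalise and tag a raw item so the system can organise it later."""
    item["tags"] = [w.lower() for w in item["text"].split() if len(w) > 3]
    item["format"] = "snippet"        # could later become a chart, table, etc.
    return item

def ingest(sources: list[str]) -> None:
    for source in sources:
        for raw in fetch(source):
            knowledge_base.append(repurpose(raw))

ingest(["regulator-open-data", "industry-news-feed"])
print(knowledge_base)
```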

We can go even deeper, where you don't need to look at your screen at all but instead wear an AR/VR headset and dive into the research, moving around as if you were walking the streets of the data. But that exploration is for our next read.