

Watch 2 TED Talks simultaneously at normal speed and effectively consume the information from both

How about crafting a movie out of 2 videos?


Do you remember the story of the last Avengers movie? Vividly, right?
Yet if you analyse it, the movie is nothing but a collection of events happening in parallel, stitched together (represented) in such a way that you are able to absorb it all and connect it nicely.

Similarly, we could develop a machine learning system that automatically interleaves two videos with similar context, so a user can see, understand and absorb the content of both.

For example, if a user is interested in improving their body language, we could stitch together https://www.youtube.com/watch?v=eIho2S0ZahI & https://www.youtube.com/watch?v=Ks-_Mh1QhMc while merging the context of each video. A neural network could then improve the pair selection (or even extend it to three videos).

In this way, one would spend less time while absorbing both talks in parallel.
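As a rough sketch of the pair-selection step, candidate video pairs could be scored by the similarity of their transcripts. The bag-of-words cosine similarity below is a deliberately simple stand-in for the neural network proposed above; the function names and the toy transcripts are assumptions of mine, not part of the original idea:

```python
import math
from collections import Counter

def bow_vector(text):
    """Bag-of-words term-frequency vector for a transcript snippet."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_pair(transcripts):
    """Return the indices of the two most context-similar transcripts."""
    vectors = [bow_vector(t) for t in transcripts]
    pairs = [(i, j) for i in range(len(vectors))
                    for j in range(i + 1, len(vectors))]
    return max(pairs, key=lambda p: cosine_similarity(vectors[p[0]],
                                                      vectors[p[1]]))
```

With three toy transcripts, two about body language and one about an unrelated topic, `best_pair` picks the two related talks; a learned embedding model would replace the bag-of-words step in practice.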

Visual / Auditory Synchronization


Produce / edit simultaneous videos with learned audiovisual cues.
The "left" lecture is correlated with the "right" lecture on a subject.
The left video carries audible or visual cues directing attention to the right video (and vice versa), omitting only the least substantial takeaways to aid memory retention.

Closed Captioning and Annotations


This solution focuses on two things:
1. Annotations
Educational videos are structured and flow with a certain logic. They should therefore be annotated so that each topic, and the main keywords of that topic, appears on screen as it is being spoken about. The effect is the same as a keynote presentation.

2. Closed Captioning
Closed captions are important because they add reading comprehension on top of the audio: when you miss a word, having it immediately available on screen lets you understand what was said.

Combined, the two have the effect of flash cards during interviews and speeches. If you're watching one video while focusing more on its annotations, you can still comprehend the audio of the other video. The brain then handles the two videos much the way it handles talking to someone on the phone while texting at the same time.
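A minimal sketch of how the annotation half might be generated from existing caption cues: flash the rarest words of each cue on screen as keywords. The cue format, the crude length-based stop-word filter, and all names here are assumptions, not part of the original proposal:

```python
from collections import Counter

STOP_LEN = 3  # crude stop-word filter: ignore very short words

def keywords(text):
    """Candidate keyword tokens from one caption cue."""
    return [w for w in text.lower().split() if len(w) > STOP_LEN]

def keyword_annotations(segments, top_k=2):
    """segments: list of (start_sec, end_sec, text) caption cues.

    For each cue, pick the words that are rarest across the whole talk
    (a rough keyword proxy) to display on screen alongside the captions.
    """
    freq = Counter(w for _, _, text in segments for w in keywords(text))
    annotations = []
    for start, end, text in segments:
        ranked = sorted(set(keywords(text)), key=lambda w: (freq[w], w))
        annotations.append((start, end, ", ".join(ranked[:top_k])))
    return annotations
```

A real system would use proper keyword extraction over the transcript, but even this frequency heuristic shows how the caption timing already carries everything needed to place the annotations.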

One in Audio, One in Graphics


There are two ways I can think of:
1) Process one of the talks into an infographic or text and display it, while the second one remains audio.

2) For bilingual people: simply listen to two talks conducted in two different languages they are fluent in.

3D Glasses Left / Right Split


This is most likely going to simply result in people throwing up, but here goes. Using the same technology that sends a different image to your left eye and right eye while watching a 3D movie, I want to send one video to your left eye and the other to your right. Audio can then be consumed through stereo headphones, with one video's audio sent to your left ear and the other to your right. From what I understand, 99% of people have a dominant eye and a dominant ear. I believe it would be best to make sure the two dominants are looking at / listening to different videos, in order to prevent one video being consumed while the other is ignored.

I’ll put together a simple version when I work out how to get the alternate imaging working.
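A minimal sketch of the audio half, assuming each talk's soundtrack is available as a list of mono samples: pan one talk hard left and the other hard right by interleaving them into stereo frames. The function name and the pad-with-silence choice are mine, not the author's:

```python
def interleave_stereo(left_track, right_track):
    """Pan one talk's mono samples hard left and the other hard right,
    producing interleaved stereo frames [L0, R0, L1, R1, ...]."""
    n = max(len(left_track), len(right_track))
    # Pad the shorter track with silence so both channels stay aligned.
    left = left_track + [0] * (n - len(left_track))
    right = right_track + [0] * (n - len(right_track))
    stereo = []
    for l, r in zip(left, right):
        stereo.extend([l, r])
    return stereo
```

For instance, `interleave_stereo([1, 2], [3])` returns `[1, 3, 2, 0]`; the resulting frames could be written out with the standard-library `wave` module as a two-channel file.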

Submit your own idea!

Submit your idea to earn a future revenue share!
