Google bought GIPS, a company which had developed many components required for RTC, such as codecs and echo cancellation techniques.
Google open sourced the technologies developed by GIPS and engaged with relevant standards bodies at the IETF and W3C to ensure industry consensus.
In May 2011, Ericsson built the first implementation of WebRTC. WebRTC implements open standards for real-time, plugin-free video, audio and data communication.
Each MediaStreamTrack has a kind ('video' or 'audio') and a label (something like 'FaceTime HD Camera (Built-in)'), and represents one or more channels of either audio or video.
In this case, there is only one video track and no audio, but it is easy to imagine use cases where there are more: for example, a chat application that gets streams from the front camera, rear camera, microphone, and a 'screenshared' application.
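As a sketch of how an app might inspect those tracks, the helper below groups a stream's tracks by kind and collects their labels. The function name `summarizeTracks` is hypothetical; the stream is assumed to be a MediaStream (or any object exposing `getTracks()`).

```javascript
// Sketch (hypothetical helper): group a stream's tracks by kind
// and list their labels. `stream` is assumed to be a MediaStream,
// i.e. an object whose getTracks() returns objects carrying the
// `kind` ('audio' or 'video') and `label` properties described above.
function summarizeTracks(stream) {
  const summary = { audio: [], video: [] };
  for (const track of stream.getTracks()) {
    if (track.kind in summary) {
      summary[track.kind].push(track.label);
    }
  }
  return summary;
}
```

In a browser, the stream itself would typically come from `navigator.mediaDevices.getUserMedia({ video: true })`.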
One of the last major challenges for the web is to enable human communication via voice and video: Real Time Communication, RTC for short.
RTC should be as natural in a web application as entering text in a text input.
Note that screen capture requires HTTPS and should only be used for development, since it is enabled via a command line flag, as explained in this discuss-webrtc post.
WebRTC uses RTCPeerConnection to communicate streaming data between browsers (aka peers), but also needs a mechanism to coordinate communication and to send control messages, a process known as signaling.
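WebRTC deliberately does not specify a signaling transport, so applications relay session descriptions and candidates through a channel of their own choosing (often a WebSocket server). As an illustration of the message flow only, here is a minimal in-memory stand-in for such a channel; the class and the message shapes are assumptions for the sketch, not part of the WebRTC API.

```javascript
// Sketch (assumed design, not part of WebRTC itself): an in-memory
// signaling channel standing in for a real server. It relays control
// messages (offers, answers, candidates) between registered peers.
class SignalingChannel {
  constructor() {
    this.handlers = {};
  }
  // Each peer registers a callback to receive messages addressed to it.
  register(peerId, onMessage) {
    this.handlers[peerId] = onMessage;
  }
  // In production this hop would cross the network; here it is a
  // direct function call to the recipient's handler.
  send(from, to, message) {
    this.handlers[to]({ from, ...message });
  }
}
```

In a real app, the caller would send an offer produced by `RTCPeerConnection.createOffer()` through this channel, and the callee would reply with an answer the same way.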
The need was real. The guiding principles of the WebRTC project are that its APIs should be open source, free, standardized, built into web browsers, and more efficient than existing technologies.