All posts by silvia

AppRTC : Google’s WebRTC test app and its parameters

If you’ve been interested in WebRTC and haven’t lived under a rock, you will know about Google’s open source testing application for WebRTC: AppRTC.

When you go to the site, a new video conferencing room is automatically created for you and you can share the provided URL with somebody else and thus connect (make sure you’re using Google Chrome, Opera or Mozilla Firefox).

We’ve been using this application for a long time to check whether issues with our own WebRTC applications are due to network connectivity, firewall restrictions, or browser bugs: in those cases AppRTC breaks down, too. If AppRTC works fine, we know we need to dig deeper into our own code.

Now, AppRTC creates a pretty poor quality video conference, because the browsers use a 640×480 resolution by default. However, there are many query parameters that can be added to the AppRTC URL through which the connection can be manipulated.

Here are my favourite parameters:

  • hd=true : turns on high definition, i.e. minWidth=1280, minHeight=720
  • stereo=true : turns on stereo audio
  • debug=loopback : connect to yourself (great to check your own firewalls)
  • tt=60 : by default, the channel is closed after 30 minutes; this gives you 60 (max 1440)

For example, a stereo, HD loopback test combines debug=loopback, stereo=true and hd=true in one URL.
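Such URLs are easy to assemble programmatically. A minimal sketch, assuming AppRTC is still hosted at apprtc.appspot.com (adjust the base URL if it has moved):

```javascript
// Sketch: build an AppRTC URL from a parameter object.
// The host name is an assumption based on where AppRTC has been hosted.
function buildAppRtcUrl(params, base = "https://apprtc.appspot.com/") {
  const query = Object.entries(params)
    .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
    .join("&");
  return query ? `${base}?${query}` : base;
}

// A stereo, HD loopback test as described above:
const url = buildAppRtcUrl({ debug: "loopback", stereo: true, hd: true });
// url === "https://apprtc.appspot.com/?debug=loopback&stereo=true&hd=true"
```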

This is not the limit of the available parameters, though. Here are some others that you may find interesting for some more in-depth geekery:

  • ss=[stunserver] : in case you want to test a different STUN server to the default Google ones
  • ts=[turnserver] : in case you want to test a different TURN server to the default Google ones
  • tp=[password] : password for the TURN server
  • audio=true&video=false : audio-only call
  • audio=false : video-only call
  • audio=googEchoCancellation=false,googAutoGainControl=true : disable echo cancellation and enable gain control
  • audio=googNoiseReduction=true : enable noise reduction (more Google-specific parameters)
  • asc=ISAC/16000 : preferred audio send codec is ISAC at 16kHz (use on Android)
  • arc=opus/48000 : preferred audio receive codec is opus at 48kHz
  • dtls=false : disable datagram transport layer security
  • dscp=true : enable DSCP
  • ipv6=true : enable IPv6
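To illustrate what the audio= parameter carries, here is a sketch of how a value like googEchoCancellation=false,googAutoGainControl=true could be turned into getUserMedia constraints. The { optional: [...] } shape is an assumption based on the old Chrome-specific constraint format of that era, not current WebRTC spec syntax:

```javascript
// Sketch: parse an "audio=..." query value into the legacy Chrome
// getUserMedia constraint structure (assumed shape, see lead-in).
function parseAudioConstraints(value) {
  const optional = value.split(",").map((pair) => {
    const [key, raw] = pair.split("=");
    // Convert "true"/"false" strings to booleans, keep anything else as-is.
    const parsed = raw === "true" ? true : raw === "false" ? false : raw;
    return { [key]: parsed };
  });
  return { optional };
}

const constraints = parseAudioConstraints(
  "googEchoCancellation=false,googAutoGainControl=true"
);
// constraints.optional is
// [{ googEchoCancellation: false }, { googAutoGainControl: true }]
```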

AppRTC’s source code is available here. And here is the file with the parameters (in case you want to check if they have changed).

Have fun playing with the main and always up-to-date WebRTC application: AppRTC.

UPDATE 12 May 2014

AppRTC now also supports the following bitrate controls:

  • arbr=[bitrate] : set audio receive bitrate
  • asbr=[bitrate] : set audio send bitrate
  • vsbr=[bitrate] : set video send bitrate
  • vrbr=[bitrate] : set video receive bitrate
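For example, a call capping audio send bitrate and video send bitrate could be set up like this; the bitrate values are made-up illustrations and the host name is an assumption:

```javascript
// Sketch: cap audio send at 32 kbps and video send at 512 kbps.
const params = new URLSearchParams({ asbr: 32, vsbr: 512 });
const bitrateUrl = `https://apprtc.appspot.com/?${params.toString()}`;
// bitrateUrl === "https://apprtc.appspot.com/?asbr=32&vsbr=512"
```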


2011, 23rd March, Google Tech Talk: “HTML5 video accessibility and the WebVTT file format”

An introduction into the WebVTT (Web Video Text Tracks) file format and how it is used to provide captions, subtitles and text video descriptions for HTML5 video. WebVTT is a simple line-based file format, which is currently in development at the WHATWG and under consideration for native implementation in the major browsers using the HTML5 <track> element.


YouTube video

YouTube video with audio descriptions

2012, Jan 16th, LCA Multimedia Miniconf: “HTML5 Video Accessibility Update”


You would think that publishing video on the Web – YouTube-style – is all that is required to make video a core part of the Web. Far from it: work on accessibility features and synchronization of several audio and video resources has been ongoing since video was introduced into HTML5. One browser has rolled out preview support for captions, others are close. But other features are still in the works. Silvia will give us an update about the latest developments at the WHATWG and W3C.


2011, Dec 1st, OZeWAI Conference: “HTML5 Video Accessibility”


HTML5 is widely considered to increase the accessibility of audio and video content for people with disabilities. Successful implementation of HTML5 can assist web professionals in reaching accessibility success criteria.

Dr Silvia Pfeiffer, HTML5 video accessibility expert and invited expert on four W3C working groups, will be speaking about HTML5 video and audio accessibility at the 2011 OZeWAI conference.


2011, Sep 19th, W3C Web and TV Workshop: “WebVTT in HTML5”


Position Statement
“WebVTT in HTML5 for video accessibility and beyond”

The purpose of this statement is to give an introduction to WebVTT [1] to build a basis for discussing the requirements that TV use cases bring to time-synchronized Web applications.

HTML5 is offering a generic means to associate time-aligned text or metadata with audio and video resources through the new <track> element [2]. In theory, <track> can accept any number of file formats as input – similar to how <img>, <video> and <audio> can in theory accept any image, video or audio resource as input. In practice, however, the choice of file format is restricted by what the browser vendors will develop support for.
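A minimal sketch of how a text track is associated with a video via <track>; the file names here are hypothetical:

```html
<!-- Sketch: a WebVTT caption track attached to a video.
     File names are invented for illustration. -->
<video controls>
  <source src="talk.webm" type="video/webm">
  <track src="talk-captions.vtt" kind="captions" srclang="en" label="English" default>
</video>
```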

One particular format that has been custom-developed for HTML requirements, and that is implemented or planned to be implemented by most modern desktop browsers, is the WebVTT file format. WebVTT is short for Web Video Text Tracks. It is a line-based file format that simply supplies data to the audio or video element by time interval within the timeline of the media resource. The time intervals are called “cues”. WebVTT thus provides a generic platform for time-synchronized application use cases around HTML5 audio and video.
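As a sketch, a WebVTT file with two cues looks like this (the cue text and times are invented):

```
WEBVTT

NOTE A minimal sketch of a WebVTT file; cue text and times are invented.

00:00:00.000 --> 00:00:04.000
Hello, and welcome to this talk.

00:00:04.000 --> 00:00:09.500
Each of these time intervals is a "cue";
a cue can span multiple lines of text.
```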

The main use cases that motivated the creation of <track> and WebVTT are in accessibility to provide text alternatives along the timeline.

The use of WebVTT for captions and subtitles has been described in detail. WebVTT’s functionality compares to that of modern TV caption formats. Positioning and cue size are specified through cue settings. A subpart of CSS has been specified to be applicable to WebVTT cue styling. In this way, WebVTT can also be used outside Web browsers by applications that do not support a full CSS engine but can implement support for the small number of styling commands specified.
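Browsers expose cue styling through the ::cue pseudo-element; a sketch with illustrative values:

```css
/* Sketch: styling WebVTT cues with the CSS subset defined for
   cue rendering; the colour values are just examples. */
video::cue {
  color: yellow;
  background-color: rgba(0, 0, 0, 0.8);
}
```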

The specification of how to use WebVTT for DVD-style chapters has been detailed recently. It allows for a hierarchy of chapters with arbitrary depth, which is very useful for navigation purposes. When made keyboard accessible, this hierarchical access will satisfy the navigation needs of blind users, and is equally useful to any user.
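A sketch of what a chapter track could look like; the titles and times are invented, and the nesting rule stated in the comment is my reading of the chapter proposal:

```
WEBVTT

NOTE
Sketch of a chapter track; titles and times are invented. In this
reading of the chapter proposal, a cue whose time range encloses
another cue's range acts as its parent chapter.

00:00:00.000 --> 00:10:00.000
Chapter 1: Introduction

00:00:00.000 --> 00:03:00.000
Chapter 1.1: Motivation
```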

WebVTT can also be used to provide text descriptions for media resources. While this has not been specified in depth yet, it is expected that in the first instance, WebVTT cues for text descriptions will be purely textual without markup. These can be rendered through screen readers using browser accessibility APIs, in a similar manner to how ARIA live regions are rendered. Further developments are possible here, e.g. adding prosody to the voicing, even though that is not yet a typical feature supported by screen readers.

The <track> element and WebVTT have also been developed with use cases beyond these mainly accessibility-motivated requirements in mind. For these use cases there is a catch-all kind of track, which is called “metadata”. It can be used for any type of timed metadata or timed text use case. Examples of such use cases could be timed and positioned annotations (similar to how YouTube’s annotations work), timed geo-coordinates, or timed and positioned hyperlinks. The rendering has to be provided through JavaScript and as such, the way in which the data is specified will be custom and can take on any form, including JSON or XML.
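A sketch of consuming such a metadata track from JavaScript; the JSON annotation shape ({ text, x, y }) is invented for illustration, and the browser wiring is shown as comments because it needs a live <video> element:

```javascript
// Sketch: consuming a kind="metadata" text track whose cue text is JSON.
function parseAnnotationCue(cueText) {
  // Cue text in a metadata track is free-form; here we assume JSON.
  return JSON.parse(cueText);
}

// In a browser, rendering would hang off the track's cuechange event:
//
// const track = video.textTracks[0];   // a kind="metadata" track
// track.mode = "hidden";               // fire events, but no default UI
// track.addEventListener("cuechange", () => {
//   for (const cue of track.activeCues) {
//     const annotation = parseAnnotationCue(cue.text);
//     // position and show the annotation, e.g. at (annotation.x, annotation.y)
//   }
// });

const annotation = parseAnnotationCue('{"text":"Look here!","x":10,"y":20}');
// annotation.text === "Look here!"
```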

A WebVTT working group is in the process of being created at the W3C. Thus, given this understanding of existing capabilities of HTML5 for time-synchronized data, this position paper would like to explore what further standardisation needs we may be able to foresee.


[1] WebVTT specification:

[2] Track element:


2012, May 23rd, Web Directions Code: “Implementing Video Conferencing in HTML5”


Recently, a new specification was proposed that extends HTML5 with real-time communication capabilities. Web developers will be able to implement video conferencing in Web pages with just a few lines of JavaScript code. The MediaStream and PeerConnection objects provide something fundamentally different from the traditional web: peer-to-peer connections without an intermediate relay. This presentation will explain the new objects and show a demo of its implementation in the Chrome Web browser.
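A sketch of the caller side of this flow, using the current standard API names (RTCPeerConnection rather than the early PeerConnection draft); the STUN server URL and the sendToPeer() signalling helper are placeholders, since WebRTC leaves signalling to the application:

```javascript
// Sketch of a video-conferencing caller; see lead-in for assumptions.
const rtcConfig = {
  iceServers: [{ urls: "stun:stun.example.org" }], // placeholder server
};

async function startCall(sendToPeer) {
  const pc = new RTCPeerConnection(rtcConfig);

  // Capture camera and microphone and feed them into the connection.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  for (const track of stream.getTracks()) pc.addTrack(track, stream);

  // Create an offer and hand it to the other peer over the
  // application's own signalling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ type: "offer", sdp: pc.localDescription });

  return pc;
}
```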


2012, Oct 19th, Web Directions South: “WebVTT and video accessibility”


WebVTT is the “Web video text track” file format. You might think it’s just about captions and subtitles. But far from it: in HTML5 we’ve developed a comprehensive solution for any text-like data or events that occur relatively rarely along a video or audio element’s timeline, which also includes tracks for blind users. Most of the big browsers have by now implemented support for text tracks. We will look at captions, subtitles, video descriptions, and chapters, which all solve parts of the accessibility picture for HTML5 media and show you how you can make use of them.