Webpage-Device talking

Feature

Voice communication between the gas detector and the monitoring webpage is supported. Implementing the Webpage-Device talking feature requires a few pieces of preparation.

Media server

A deployed media server is needed to route the media streams. It must accept streams published over both the RTSP and WebRTC protocols, through which the gas detectors and the webpages publish their audio and video streams, respectively.

We recommend Mediamtx as the media server implementation.
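As a starting point, a minimal mediamtx.yml sketch could look like the following. The key names are assumptions based on recent Mediamtx releases; check the mediamtx.yml shipped with your version for the authoritative names and defaults.

# Listen addresses for the two protocols this feature relies on.
rtspAddress: :8554    # gas detectors publish over RTSP here
webrtcAddress: :8889  # webpages publish over WebRTC (WHIP) here

paths:
  # Accept publishers and readers on any path.
  all: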

Frontend script

The workflow for talking between devices and webpages boils down to two steps:

  • Publish the voice captured on each end to the media server.
  • Read the stream published by the other end and play it through the speaker.

We have already implemented the capturing and playing parts on the device. As long as the rtspHost and rtspPort config are set up properly, it is as simple as emitting a speaking event from the socket.io server to the connected device, which triggers audio capture and playback on the device.
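For example, a server-side sketch could look like the following. Only the speaking event itself comes from this setup; the port, the start-talk trigger event, and the room-per-device convention are assumptions made for illustration.

import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  // Assumption: each gas detector joins a room named after itself
  // right after connecting, e.g. socket.join(devname).

  // Hypothetical trigger from the monitoring webpage: forward a
  // "speaking" event to the target device to start audio capture
  // and playback on it.
  socket.on("start-talk", (devname) => {
    io.to(devname).emit("speaking");
  });
});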

The other half of the job is a bit more challenging:

  1. Capture audio in the browser.
  2. Stream it to the media server (over the WebRTC protocol).

It is highly recommended to gain some basic knowledge of WebRTC before writing the frontend script that implements the talking feature.

First, get access to the audio input devices in the browser:

const stream = await navigator.mediaDevices.getUserMedia({
  // we need only audio devices
  audio: true,
});
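Note that getUserMedia is only available in secure contexts (HTTPS or localhost), and the returned promise rejects if the user denies the microphone permission, so handle that failure in real code.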

Create an instance of RTCPeerConnection:

const iceConf = {
  // Choose an existing TURN server or deploy your own.
  // Refer to https://webrtc.org/getting-started/turn-server
  iceServers: [
    {
      urls: "turn:yourturn.server:3478",
      username: "name",
      credential: "cred",
    },
  ],
};
const pc = new RTCPeerConnection(iceConf);
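While debugging, it helps to watch the state of the peer connection, for example:

// Log ICE progress so misconfigured TURN credentials or blocked
// ports are easy to spot.
pc.addEventListener("iceconnectionstatechange", () => {
  console.log("ICE connection state:", pc.iceConnectionState);
});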

Generate your offer, POST it to the media server, and apply the returned answer:

stream.getTracks().forEach((track) => {
  pc.addTrack(track, stream);
});

const offer = await pc.createOffer();
// Apply the offer locally before posting it so that ICE gathering starts.
await pc.setLocalDescription(offer);

const res = await fetch(
  // WHIP is short for WebRTC-HTTP ingestion protocol, which is supported by Mediamtx.
  // Reference: https://www.ietf.org/archive/id/draft-ietf-wish-whip-01.html
  // `devname` identifies the device channel the webpage talks to.
  `https://yourmedia.server:8889/channel/audio/${devname}/whip`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/sdp",
    },
    body: offer.sdp,
  }
);

// The response body carries the SDP answer generated by the media server.
const sdp = await res.text();
const answer = new RTCSessionDescription({
  type: "answer",
  sdp,
});
await pc.setRemoteDescription(answer);
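When the talk ends, release the microphone and close the session. The WHIP draft also defines teardown as an HTTP DELETE to the resource URL returned in the Location header of the POST response; the sketch below assumes your media server implements that part of the draft.

const sessionUrl = res.headers.get("Location");

async function stopTalking() {
  // Stop capturing and tear down the peer connection.
  stream.getTracks().forEach((track) => track.stop());
  pc.close();
  // Delete the WHIP session on the server, resolving a possibly
  // relative Location header against the endpoint URL.
  if (sessionUrl) {
    await fetch(new URL(sessionUrl, res.url), { method: "DELETE" });
  }
}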

Keep in mind that these are only the simplest snippets showing how to stream audio from a webpage. Expect plenty of debugging before the feature is ready for production.