Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/17331531/

Date: 2020-08-29 10:31:40 · Source: igfitidea

HTML5 capture and save video

javascript · html · video-streaming · getusermedia

Asked by scalen121

I am building a site for sing-alongs where the user can capture a video of themselves singing along with an mp3. I have gotten to the point that I can access the camera and display the live stream, but how can i save the video so the user can download and keep it?

My code:

<!DOCTYPE html>
<head>
<link href="css/bootstrap.css" rel="stylesheet">
<style>
#container {
margin: 0px auto;
width: 500px;
height: 375px;
border: 10px #333 solid;
}
#videoElement {
width: 500px;
height: 375px;
background-color: #666;
}
</style>
</head>
<body>

<button class="btn" onclick="show();">Record!</button>


<div id="record" style="display:none; text-align:center;">
<div id="container">
<video id="videoElement">
</video>
</div>
<button id="play" class="btn" onclick="play()">Start Recording!</button>
<audio id="song" hidden>
<source src="love.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>
</div>



<script src="http://code.jquery.com/jquery.js"></script>
<script src="js/bootstrap.js"></script>
<script>

var video = document.querySelector("#videoElement");

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia || navigator.oGetUserMedia;

if (navigator.getUserMedia) {       
navigator.getUserMedia({video: true, audio: true}, handleVideo, videoError);
}

function handleVideo(stream) {
video.src = window.URL.createObjectURL(stream);
document.getElementById("videoElement").pause();
}

function videoError(e) {
alert("There was an error with the video stream.\nCheck that your webcam is connected.");
}

function play()
{
var video = document.getElementById("videoElement");
var music = document.getElementById("song");
   var button = document.getElementById("play");
   if (video.paused) {
      video.play();
      music.play();
      button.textContent = "Stop Recording";
   } else {
      video.pause();
      music.pause();
      button.textContent = "Continue Recording";
   }
}

function show()
{
document.getElementById("record").style.display="block";
}
</script>
</body>

Is there a way in handleVideo to save the stream as it comes in, or something similar?

Accepted answer by scalen121

UPDATE 12/2014: FYI, there is a new API on its way called MediaRecorder. Currently it is only supported in Firefox and in an early state, but it is something to keep in mind.

mediaStream and local storage

In a purely local environment you can't, and won't, get a very good result. You can save out frames by drawing the video stream onto a canvas element, store the JPEG images in local storage together with the audio (which must be saved separately), and then in post-processing use a library to create, for example, an MJPEG file (AFAIK there currently isn't one that supports audio).

You will run into several issues with this approach, however: it takes a lot of time for JavaScript to process all this information. Just saving a frame as JPEG, converting it to a Blob, and saving it to the file system or IndexedDB will consume most (or more) of the time budget you have available for a single frame.

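A minimal sketch of that frame-grabbing step (the names `frameFilename` and `grabFrame`, and the JPEG quality value, are illustrative, not from the original post): draw the current video frame onto a hidden canvas and serialize it to a JPEG Blob, which is exactly the work that eats the per-frame budget.

```javascript
// Sketch: capture the current <video> frame as a JPEG Blob via a canvas.
// The naming helper is pure; grabFrame uses browser-only canvas APIs.
function frameFilename(index) {
  // e.g. 0 -> "frame-00000.jpg", so frames sort correctly for post-processing
  return "frame-" + String(index).padStart(5, "0") + ".jpg";
}

function grabFrame(video, canvas, quality) {
  return new Promise(function (resolve) {
    var ctx = canvas.getContext("2d");
    // Copy the video's current frame onto the canvas
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // toBlob is asynchronous; JPEG encoding is the expensive part
    canvas.toBlob(resolve, "image/jpeg", quality || 0.8);
  });
}
```

Each Blob would then be written to IndexedDB (or the old webkitRequestFileSystem) under its `frameFilename`, with the audio captured and stored separately.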
You will not be able to synchronize the video frames with the audio properly. You can save a time-stamp per frame and use that to "correct" the frames, but your FPS will most likely vary, creating a jerky video. And even if you get the sync roughly right time-wise, you will probably face lag problems where audio and video don't match, as they start out as two separate streams.

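One hedged sketch of such a time-stamp correction (the function names are made up for illustration): keep a capture time with each stored frame, then for each output slot of the target frame rate pick the stored frame whose timestamp is nearest, duplicating or dropping frames as needed.

```javascript
// Sketch: resample variable-FPS captures onto a fixed output frame rate.
// frames is an array of records like { t: captureTimeMs, blob: ... }.
function nearestFrameIndex(frames, slotTimeMs) {
  var best = 0;
  for (var i = 1; i < frames.length; i++) {
    if (Math.abs(frames[i].t - slotTimeMs) < Math.abs(frames[best].t - slotTimeMs)) {
      best = i;
    }
  }
  return best;
}

// Produce one frame per output slot at a steady target FPS.
function resample(frames, targetFps, durationMs) {
  var out = [];
  for (var t = 0; t < durationMs; t += 1000 / targetFps) {
    out.push(frames[nearestFrameIndex(frames, t)]);
  }
  return out;
}
```

This only fixes the frame timing; the audio/video lag issue described above remains, since the audio is still a separate stream.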
But videos are very rarely above 30 FPS (US) or 25 FPS (Europe), so you won't need the full 60 FPS rate the browser may provide. This gives you a slightly better time budget of about 33 milliseconds per frame on the US (NTSC) system, and a little more if you are in a country using the PAL system. There is nothing wrong with using an even lower frame rate, but at a certain point (< 12-15 FPS) you will start noticing a severe lack of smoothness.

There are, however, many factors that will influence this, such as the CPU, the disk system, the frame dimensions and so forth. JavaScript is single-threaded and the canvas API is synchronous, so a 12-core CPU won't help you much in that regard, and Web Workers' usefulness is currently limited pretty much to longer-running tasks. If you have a lot of memory available you can cache the frames in memory, which is doable, and do all the processing in post, which again will take some time. A stream recorded at 720p @ 30 FPS will consume a minimum of 105 MB per second (that's just raw data, not including the browser's internal handling of buffers, which may double or even triple this).

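The 105 MB/s figure can be sanity-checked with a little arithmetic, assuming 4 bytes per pixel (RGBA) for the raw, uncompressed frames:

```javascript
// Raw, uncompressed data rate for 720p RGBA video at 30 FPS.
var width = 1280, height = 720, bytesPerPixel = 4, fps = 30;

var bytesPerFrame = width * height * bytesPerPixel; // 3,686,400 bytes
var bytesPerSecond = bytesPerFrame * fps;           // 110,592,000 bytes
var mibPerSecond = bytesPerSecond / (1024 * 1024);  // ~105.5 MiB/s

console.log(mibPerSecond.toFixed(1) + " MiB/s");    // prints "105.5 MiB/s"
```

JPEG compression shrinks this considerably, but the encoding cost is precisely what consumes the per-frame time budget discussed above.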
WebRTC

A better solution is probably to use WebRTC and connect to a server (external or local) and process the stream there. This stream will contain synchronized audio and video, and you can store the stream temporarily to disk without the limitations of a browser's sand-boxed storage area. The drawbacks here will be (for external connections) bandwidth, as this may reduce the quality, as well as the server's capability.

This opens up the possibility of using, for example, Node.js, .Net or PHP to do the actual processing with third-party components (or a more low-level approach such as compiled C/C++ and CGI/piping, if you're into that).

You can check out this open-source project which supports recording of WebRTC streams:
http://lynckia.com/licode/

The Licode project provides a NodeJS client API for WebRTC so that you can use it on the server side; see the docs.

And that is basically as far as you can go with the current state of HTML5.

Flash

Then there is the option of installing Flash and using that - you will still need a server-side solution (Red5, Wowza or AMS).

This will probably give you a less painful experience, but you need to have Flash installed in the browser (obviously), and in many cases there is a higher cost factor due to licenses (see Red5 for an open-source alternative).

If you are willing to pay for commercial solutions there are solutions such as this:
http://nimbb.com/

Answered by Vikash Kumar

Here is fully working code for capturing a video and saving it locally:

It needs permissions for saving files, camera and mic:

<html>
    <div class="left">
        <div id="startButton" class="button">
        Start
        </div>
        <h2>Preview</h2>
        <video id="preview" width="160" height="120" autoplay muted></video>
    </div>

    <div class="right">
        <div id="stopButton" class="button">
        Stop
        </div>
        <h2>Recording</h2>
        <video id="recording" width="160" height="120" controls></video>
        <a id="downloadButton" class="button">
        Download
        </a>
    </div>

    <script>

    let preview = document.getElementById("preview");
    let recording = document.getElementById("recording");
    let startButton = document.getElementById("startButton");
    let stopButton = document.getElementById("stopButton");
    let downloadButton = document.getElementById("downloadButton");
    let logElement = document.getElementById("log");

    let recordingTimeMS = 5000;


    function log(msg) {
        //logElement.innerHTML += msg + "\n";
    }

    function wait(delayInMS) {
        return new Promise(resolve => setTimeout(resolve, delayInMS));
    }

    function startRecording(stream, lengthInMS) {
        let recorder = new MediaRecorder(stream);
        let data = [];

        recorder.ondataavailable = event => data.push(event.data);
        recorder.start();
        log(recorder.state + " for " + (lengthInMS/1000) + " seconds...");

        let stopped = new Promise((resolve, reject) => {
        recorder.onstop = resolve;
        recorder.onerror = event => reject(event.name);
        });

        let recorded = wait(lengthInMS).then(
        () => recorder.state == "recording" && recorder.stop()
        );

        return Promise.all([
            stopped,
            recorded
        ])
        .then(() => data);
    }

    function stop(stream) {
        stream.getTracks().forEach(track => track.stop());
    }

    startButton.addEventListener("click", function() {
        navigator.mediaDevices.getUserMedia({
            video: true,
            audio: false
        }).then(stream => {
                preview.srcObject = stream;
                preview.captureStream = preview.captureStream || preview.mozCaptureStream;
                return new Promise(resolve => preview.onplaying = resolve);
              }).then(() => startRecording(preview.captureStream(), recordingTimeMS))
              .then (recordedChunks => {
                let recordedBlob = new Blob(recordedChunks, { type: "video/webm" });
                recording.src = URL.createObjectURL(recordedBlob);  
                downloadButton.href = recording.src;
                downloadButton.download = "RecordedVideo.webm";

                log("Successfully recorded " + recordedBlob.size + " bytes of " +
                    recordedBlob.type + " media.");
              })
              .catch(log);
        }, false);


        stopButton.addEventListener("click", function() {
        stop(preview.srcObject);
        }, false);

    </script>
</html>

Reference: Recording a media element (MDN)

Answered by cocco

The stream is created here:

function handleVideo(stream) {
 video.src = window.URL.createObjectURL(stream);
 document.getElementById("videoElement").pause();
}

Your data is the stream, or window.URL.createObjectURL(stream).

But you can't simply write the stream, or window.URL.createObjectURL(stream), to localStorage (2 MB - too small) or webkitRequestFileSystem (which allows you GBs)... you need to read the data output to the video tag, draw each single frame onto a canvas, and save that to the webkit file system.

As the filesystem API changed recently, I googled for the new code and found this perfect example for you: https://gist.github.com/piatra/2549734

In the example he uses:

setTimeout(function(){ draw(v, bc, w, h); }, 200);

which writes a frame every 200 ms.

If you want a custom framerate, just change the 200 ms to 1000/25 (25 FPS).

Or use requestAnimationFrame, and you should get around 60 FPS if your CPU supports that.

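A hedged sketch of that requestAnimationFrame variant (the names `msPerFrame` and `startCaptureLoop` are illustrative): drive the drawing loop from requestAnimationFrame, but throttle it to a target frame rate so you aren't forced to process the full 60 FPS.

```javascript
// Sketch: draw video frames to a canvas on requestAnimationFrame,
// throttled to a target FPS instead of a fixed setTimeout interval.
function msPerFrame(fps) {
  return 1000 / fps; // e.g. 25 FPS -> 40 ms between frames
}

function startCaptureLoop(video, canvas, fps, onFrame) {
  var ctx = canvas.getContext("2d");
  var interval = msPerFrame(fps);
  var last = 0;
  var running = true;

  function tick(now) {
    if (!running) return;
    if (now - last >= interval) {
      last = now;
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
      onFrame(canvas); // e.g. canvas.toBlob(...) to store this frame
    }
    requestAnimationFrame(tick);
  }
  requestAnimationFrame(tick);
  return function stop() { running = false; };
}
```

The throttle matters because an unthrottled loop at 60 FPS doubles the per-frame work for no visible benefit at typical video frame rates.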
Now you don't have a real stream in a nice format like MP4... but lots of frames that you can display with another function. Again, you need a really fast CPU.

In this example, audio does not work.

To also record the audio to WAV (you can't record MP3 or AAC), I found this:

http://typedarray.org/from-microphone-to-wav-with-getusermedia-and-web-audio/

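The WAV container itself is simple enough to write by hand. Here is a hedged sketch (the function name `wavHeader` is made up; 16-bit PCM is assumed) of building the 44-byte header that such a recorder prepends to the raw audio samples:

```javascript
// Sketch: build a 44-byte RIFF/WAVE header for 16-bit PCM audio.
function wavHeader(numSamples, sampleRate, numChannels) {
  var bytesPerSample = 2; // 16-bit PCM
  var blockAlign = numChannels * bytesPerSample;
  var dataSize = numSamples * blockAlign;
  var buf = new ArrayBuffer(44);
  var v = new DataView(buf);

  function writeString(offset, s) {
    for (var i = 0; i < s.length; i++) v.setUint8(offset + i, s.charCodeAt(i));
  }

  writeString(0, "RIFF");
  v.setUint32(4, 36 + dataSize, true);            // file size minus 8
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  v.setUint32(16, 16, true);                      // fmt chunk size
  v.setUint16(20, 1, true);                       // audio format: PCM
  v.setUint16(22, numChannels, true);
  v.setUint32(24, sampleRate, true);
  v.setUint32(28, sampleRate * blockAlign, true); // byte rate
  v.setUint16(32, blockAlign, true);
  v.setUint16(34, 16, true);                      // bits per sample
  writeString(36, "data");
  v.setUint32(40, dataSize, true);
  return buf;
}
```

The recorder then appends the interleaved 16-bit samples captured via Web Audio after this header and wraps the whole thing in a Blob of type `audio/wav`.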
So in the end you could do that... but it would take a lot of space for just a few minutes of video, and it needs a very fast CPU to process everything.
