
Exploring WebSocket: voice and image capabilities

2015/12/26 · JavaScript · 3 Comments · websocket

Original article: AlloyTeam

WebSocket should be familiar to most readers by now, and if it isn't, no worries; it can be summed up in one sentence:

"The WebSocket protocol is a new protocol introduced with HTML5. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is a huge step forward; we can wave goodbye to comet and long polling, and be glad we live in an era that has HTML5~

This article explores WebSocket in three parts:

first, common ways of using WebSocket; second, building a server-side WebSocket implementation entirely by hand; and finally, the main event: two demos built on WebSocket, image transfer and an online voice chat room. Let's go.

I. Common WebSocket usage

Here are three WebSocket implementations I consider common. (Note: this article assumes a Node.js context.)

1. socket.io

Let's start with a demo:

JavaScript

var http = require('http');
var io = require('socket.io');
 
var server = http.createServer(function(req, res) {
    res.writeHead(200, {'content-type': 'text/html; charset=utf-8'});
    res.end();
}).listen(8888);
 
var socket = io.listen(server);
 
socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', { /* options */ });
 
    socket.on('xxx', function(data) {
        // do something
    });
});

Anyone familiar with WebSocket is bound to know socket.io, because socket.io is extremely well known and genuinely good: it handles timeouts, the handshake, and so on for you. I suspect it is also the most widely used way of working with WebSocket. Its single best feature is graceful degradation: when the browser does not support WebSocket, it quietly falls back internally to long polling and the like, so neither users nor developers need to care about the details. Very convenient.

Everything has two sides, though. Precisely because socket.io is so comprehensive, it has its pitfalls, the biggest being bloat: its wrapping adds a fair amount of communication overhead to the data, and the advantage of graceful degradation is gradually losing its shine as browser standardization progresses:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

This is not to accuse socket.io of being bad or obsolete; it's just that sometimes it is worth considering other implementations~

 

2. The http module

Having just called socket.io bloated, let's now look at something lightweight. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();
server.on('upgrade', function(req) {
    console.log(req.headers);
});
server.listen(8888);

A very simple implementation. Internally, socket.io implements WebSocket this same way; it just wraps some handler logic on top for us afterwards, and we can add our own handling here too. Here are two screenshots from the socket.io source:

(Figure: socket.io source code screenshot 1)

(Figure: socket.io source code screenshot 2)
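
For illustration, here is a minimal sketch of the same idea: completing the WebSocket handshake directly inside the http module's 'upgrade' handler. This is not socket.io's actual code, just an assumption-laden outline; the frame parsing that would have to follow is covered in part II below.

JavaScript

var http = require('http');
var crypto = require('crypto');
 
// Magic GUID defined by the WebSocket spec
var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
var server = http.createServer();
 
// The 'upgrade' event also hands us the raw TCP socket,
// so we can answer the handshake ourselves
server.on('upgrade', function(req, socket) {
    var accept = crypto.createHash('sha1')
        .update(req.headers['sec-websocket-key'] + GUID)
        .digest('base64');
 
    socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
                 'Upgrade: websocket\r\n' +
                 'Connection: Upgrade\r\n' +
                 'Sec-WebSocket-Accept: ' + accept + '\r\n' +
                 '\r\n');
 
    // From here on, the socket carries raw WebSocket frames
});
 
server.listen(8888);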

 

3. The ws module

One of the examples later uses it, so I'll just mention it here and go into detail then~

 

II. Implementing server-side WebSocket ourselves

We've just seen three common ways to get WebSocket. Now think about it from a developer's point of view:

compared with the traditional HTTP data-exchange model, WebSocket adds server-pushed events which the client receives and handles accordingly; development-wise the difference really isn't that big.

That is because those modules have already filled in all the pits around data-frame parsing for us. In this second part we'll try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research that helped here; I'll only cover this part briefly. If it piques your curiosity, search for 【web技术研究所】.

Implementing server-side WebSocket yourself comes down to two things: using the net module to receive the data stream, and parsing the data against the official frame-structure diagram. Once these two parts are done, all the low-level work is in place.

First, a packet capture of the WebSocket handshake request sent by a client.

The client code is trivial:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

(Figure: packet capture of the client's WebSocket handshake request)

The server has to answer this key: take the key, append a specific string, run SHA-1 over it once, base64-encode the result, and send it back.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // Grab the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // Append the WS magic string, SHA-1 it once, then base64-encode the digest
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // Write the response back to the client; every one of these fields is required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // This field carries the key the server just computed
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // An empty line terminates the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);
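
As a quick sanity check of the key computation, RFC 6455 itself gives a sample key together with its expected accept value, so the hashing step can be verified in isolation:

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
// Sample Sec-WebSocket-Key from RFC 6455
var sampleKey = 'dGhlIHNhbXBsZSBub25jZQ==';
var accept = crypto.createHash('sha1').update(sampleKey + WS).digest('base64');
 
console.log(accept); // should print: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=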

With that, the handshake part is done; what remains is the work of parsing and generating data frames.

First, take a look at the frame-structure diagram from the spec.

(Figure: WebSocket frame structure diagram from the spec)

A quick rundown:

FIN is the flag marking whether this is the final fragment.

RSV bits are reserved and set to 0.

opcode marks the frame type: continuation (fragmentation), text or binary, ping/pong heartbeat frames, and so on.

Here is a table of opcode values.

(Figure: opcode value mapping table)

MASK says whether the payload is masked.

Payload len, together with the extended payload length that follows it, expresses the data length; this is the fiddliest part.

Payload len is only 7 bits, so as an unsigned integer it can only take values 0 to 127, and such a small number obviously cannot describe a large payload. The rule is therefore: when the data length is 125 or less, this field is the length itself; if the field is 126, the next two bytes store the length; and if it is 127, the next eight bytes store the length.

Masking-key is the mask.

Below is the code for parsing a data frame.

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        // First two bytes: FIN + opcode, then MASK + payload length
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };
 
    // 126: the real length is in the next two bytes
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }
 
    // 127: the real length is in the next eight bytes
    // (the high four bytes are skipped here, assuming the payload fits in 32 bits)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }
 
    if(frame.Mask) {
        // Client-to-server frames are masked: read the 4-byte key,
        // then XOR every payload byte with key[j % 4]
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];
 
        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }
 
    s = new Buffer(s);
 
    // Opcode 1 is a text frame, so decode the payload to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }
 
    frame.PayloadData = s;
    return frame;
}

Next, generating a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;
 
    // First byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);
 
    // Then the payload length, using the 126/127 extension rules
    // (server-to-client frames are unmasked, so the MASK bit stays 0)
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }
 
    return Buffer.concat([new Buffer(s), o]);
}
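
Putting the handshake and the two frame helpers together, a rough echo-server sketch could look like the following. This is only an illustration built on the same net-based skeleton as above: it assumes decodeDataFrame and encodeDataFrame are in scope, and that every 'data' event carries exactly one complete frame, which real code must not rely on.

JavaScript

var crypto = require('crypto');
var net = require('net');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
net.createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // First packet: finish the handshake, as shown earlier
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            o.write('HTTP/1.1 101 Switching Protocols\r\n' +
                    'Upgrade: websocket\r\n' +
                    'Connection: Upgrade\r\n' +
                    'Sec-WebSocket-Accept: ' + key + '\r\n\r\n');
        } else {
            // Later packets: decode the incoming frame and echo the payload back as a text frame
            var frame = decodeDataFrame(e);
            o.write(encodeDataFrame({ FIN: 1, Opcode: 1, PayloadData: 'echo: ' + frame.PayloadData }));
        }
    });
}).listen(8888);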

Everything above simply follows the frame-structure diagram, so I won't go into more detail; the focus of this article is the next part. If this area interests you, head over to web技术研究所~

 

III. Transferring images over WebSocket, and a WebSocket voice chat room

Now for the main feature: the most important goal of this article is to show some scenarios where WebSocket shines.

1. Transferring images

Let's first think through the steps for transferring an image. The server receives the client's request, reads the image file, and forwards the binary data to the client. What does the client do with it? Use a FileReader object, of course.

Client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888"); ws.onopen = function(){ console.log("握手成功"); }; ws.onmessage = function(e) { var reader = new FileReader(); reader.onload = function(event) { var contents = event.target.result; var a = new Image(); a.src = contents; document.body.appendChild(a); } reader.readAsDataUTiggoL(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("handshake OK");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving the message we call readAsDataURL and add the image straight to the page as a base64 data URL.
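
As an aside, if you would rather skip the base64 round trip, the received Blob can in principle be shown via an object URL instead. This is just a sketch: the Blob arrives without an explicit MIME type, so it relies on the browser sniffing the image format, much like the data-URL approach above.

JavaScript

ws.onmessage = function(e) {
    // For binary frames e.data is a Blob (the default binaryType), so no FileReader is needed
    var a = new Image();
    a.onload = function() {
        URL.revokeObjectURL(a.src); // release the object URL once the image has loaded
    };
    a.src = URL.createObjectURL(e.data);
    document.body.appendChild(a);
};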

Now over to the server-side code:

JavaScript

fs.readdir("skyland", function(err, files) { if(err) { throw err; } for(var i = 0; i < files.length; i++) { fs.readFile('skyland/' + files[i], function(err, data) { if(err) { throw err; } o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) { var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2); if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile('skyland/' + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded here to 2, a Binary Frame, so the client receiving the data won't try to toString it (which would otherwise throw)~

The code is simple. Now let me share how WebSocket image transfer performs speed-wise.

The test covered quite a few images, 8.24 MB in total.

An ordinary static-asset server needs about 20 s (the server is far away).

A CDN needs about 2.8 s.

So what about our WebSocket approach??!

The answer: also about 20 s. Disappointed? The slowness is in the transfer, not in the server reading the images; on the local machine the same image assets finish in about 1 s. So it seems a raw data stream cannot break the limits of distance to speed up transfer either.

Now let's look at another use of WebSocket~

 

2. Building a voice chat room with WebSocket

First, let's lay out what the voice chat room does:

a user joins a channel and feeds audio in from the microphone; it is sent to the backend, which relays it to everyone else in the channel, and they receive the message and play it back.

The difficulty appears to lie in two places: first, getting the audio in; second, playing back the received data stream.

Audio input first. This uses HTML5's getUserMedia method, but beware: taking this method live comes with a big pitfall, which I'll get to at the end. Code first:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}
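
A side note: navigator.getUserMedia is the old callback-style API (hence the webkit-prefixed fallback in the full code later). In current browsers the promise-based equivalent would look roughly like this, assuming mediaDevices is available:

JavaScript

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (stream) {
            recorder = new SRecorder(stream);
        })
        .catch(function (err) {
            console.log('getUserMedia failed:', err);
        });
}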

The first argument is {audio: true}, enabling audio only; we then create an SRecorder object, and basically everything that follows happens on that object. At this point, if the code is running locally, the browser should ask whether to allow microphone input; confirm, and it starts.

Next let's see what the SRecorder constructor is; here are the important parts:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering will know the idea: "before a piece of audio reaches the speakers for playback, we intercept it on the way, and so we get hold of the audio data; this interception is done by window.AudioContext, and all our audio operations are built on this object." Through an AudioContext we can create various AudioNode nodes and add filters to play back modified sound.

Recording works on the same principle: we still go through AudioContext, but with an extra step of receiving the microphone's audio input. Instead of fetching an audio ArrayBuffer via ajax and decoding it, as we would when processing existing audio, receiving the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream passed to getUserMedia's success callback (getUserMedia's second parameter).

As for the createScriptProcessor method, the official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short: this method lets you process the captured audio directly with JavaScript.

We've finally reached audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio capture node:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

In other words, context.destination returns the final destination of all the audio in the context.

Good. At this point we still need an event to listen for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method to it because it's needed later. Its real gem is the encodeWAV method, in which the author did several rounds of audio compression and optimization; it will be posted along with the complete code at the end.

At this point the whole "user joins the channel and feeds audio in from the microphone" part is done. Next comes sending the audio stream to the server, and here things get slightly painful. As we said earlier, WebSocket uses the opcode to indicate whether the data is text or binary, but what onaudioprocess pushes in via input is an array, while playback ultimately needs a Blob of {type: 'audio/wav'}. So before sending, we must convert the arrays into a WAV Blob, which is exactly what the encodeWAV method above is for.

The server seems simple: all it has to do is relay.

Local testing did indeed work, but then the giant pit arrived! When the program runs on a server, calling getUserMedia tells me it must be in a secure context, i.e. it requires https, which means ws has to become wss as well… Because of this, the server code no longer uses our own hand-rolled handshake, parsing, and encoding. The code is as follows.

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
 
    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }
 
        res.end(data);
    });
});
 
var wss = new ws.Server({server: server});
 
wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});
 
server.listen(8888);

The code is still very simple: it uses the https module together with the ws module mentioned at the beginning. userMap is a mock channel, implementing only the basic relay function.

The ws module is used because pairing it with https to get wss is just too convenient, with zero conflict with the logic code.

Setting up https won't be covered here; it mainly requires a private key, a CSR certificate signing request, and the certificate file. Interested readers can look into it (though if you don't, you won't be able to use getUserMedia in a production environment anyway…).
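
One small client-side convenience (a hypothetical helper, not part of the original code): pick ws or wss to match how the page itself was served, so the same script works locally over http and in production over https.

JavaScript

// Choose the WebSocket scheme to match the page's own protocol
var scheme = location.protocol === 'https:' ? 'wss' : 'ws';
var ws = new WebSocket(scheme + '://x.x.x.x:8888');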

上边是完全的前端代码

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device cannot do voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake OK');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    var config = {};
 
    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recorded data (in samples)
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample depth: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , outputSampleBits: config.sampleBits       // output sample depth: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge the buffered chunks
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample by the input/output sample-rate ratio
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF chunk identifier
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAVE format marker
            writeString('WAVE'); offset += 4;
            // "fmt " sub-chunk marker
            writeString('fmt '); offset += 4;
            // size of the fmt sub-chunk, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format category (1 = PCM)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second per channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels × sample rate × bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels × bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data sub-chunk marker
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release it to send.

I did try real-time talking without a push-to-talk key, sending via setInterval, but the background noise was rather heavy and the result poor; that would need another layer of wrapping around encodeWAV to do more ambient-noise removal, so I went with the more convenient push-to-talk mode.

 

This article first surveyed common WebSocket usage, then, following the spec, we tried parsing and generating data frames ourselves, gaining a deeper understanding of WebSocket.

Finally, the two demos showed WebSocket's potential. The voice chat room demo covers quite a bit of ground; readers who have never touched the AudioContext object would do well to read up on AudioContext first.

That's it for this article~ Thoughts and questions are welcome, let's discuss~

 
