1 Basic setup
Open cmd and run `conda env list`, then `conda activate py38`. Check `nvidia-smi` and `nvcc`: as shown in the figure, CUDA is 11.7. To double-check, look at the CUDA installation under Program Files; seeing 11.7 there is enough. Readers can confirm their own version the same way.

After launching YOLOv8 tracking, it turned out to be extremely slow: CPU usage was very high while GPU usage stayed at zero. In cmd, `import torch` followed by `torch.cuda.is_available()` returned False, confirming that the installed PyTorch was the CPU build. Reinstall the CUDA build: open the PyTorch website, copy the install command, and change the CUDA version to match your machine. Then run:

python yolo\v8\detect\detect_and_trk.py model=yolov8s.pt source="d:\test.mp4" show=True

GPU usage rises to around 40%.
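The check above can be wrapped in a small helper. This is only a sketch: `cuda_status` is a hypothetical name, and it falls back gracefully when PyTorch is not installed in the active environment:

```python
def cuda_status():
    """Report the installed torch version (or a marker when torch is absent)
    and whether PyTorch can actually see a CUDA GPU."""
    try:
        import torch  # assumes PyTorch is installed in the active conda env
    except ImportError:
        return ("not installed", False)
    return (torch.__version__, torch.cuda.is_available())

version, cuda_ok = cuda_status()
# cuda_ok == False means either a CPU-only torch build or no visible GPU
print(version, cuda_ok)
```

If `cuda_ok` is False on a machine with a working `nvidia-smi`, the CPU build is the likely culprit, exactly as in the situation described above.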
2 Output to the web
The next task is an RTSP server: encode the results and publish them. For other machines to see the output, we must obtain the annotated frames programmatically, then use the real-time transport protocol to turn them into an RTP stream, making this machine an RTSP server that tools such as VLC can pull the encoded stream from.

First change the command so it no longer shows a window itself:

python yolo\v8\detect\detect_and_trk.py model=yolov8s.pt source="d:\test.mp4"

and display the frames ourselves with OpenCV by adding two lines in the code:

cv2.imshow("qbshow", im0)
cv2.waitKey(3)

This works: im0 is the frame with the tracking boxes already drawn on it. On top of this show code we only need to send the image out, over a websocket or a raw socket; either is fine. That raises a client-or-server choice, and we make Python the server so that Python never needs reconnect logic. Making Python the server has several advantages: 1) it can send images directly to web clients; 2) it can send them directly to the RTSP server as well.

def write_results(self, idx, preds, batch):
    p, im, im0 = batch
    log_string = ""
    if len(im.shape) == 3:
        im = im[None]  # expand for batch dim
    self.seen += 1
    im0 = im0.copy()
    if self.webcam:  # batch_size >= 1
        log_string += f"{idx}: "
        frame = self.dataset.count
    else:
        frame = getattr(self.dataset, "frame", 0)

    self.data_path = p
    save_path = str(self.save_dir / p.name)  # im.jpg
    self.txt_path = str(self.save_dir / "labels" / p.stem) + \
        ("" if self.dataset.mode == "image" else f"_{frame}")
    log_string += "%gx%g " % im.shape[2:]  # print string
    self.annotator = self.get_annotator(im0)

    det = preds[idx]
    self.all_outputs.append(det)
    if len(det) == 0:
        return log_string
    for c in det[:, 5].unique():
        n = (det[:, 5] == c).sum()  # detections per class
        log_string += f"{n} {self.model.names[int(c)]}{'s' * (n > 1)}, "

    # ..................USE TRACK FUNCTION....................
    dets_to_sort = np.empty((0, 6))
    for x1, y1, x2, y2, conf, detclass in det.cpu().detach().numpy():
        dets_to_sort = np.vstack((dets_to_sort,
                                  np.array([x1, y1, x2, y2, conf, detclass])))
    tracked_dets = tracker.update(dets_to_sort)
    tracks = tracker.getTrackers()
    for track in tracks:
        [cv2.line(im0,
                  (int(track.centroidarr[i][0]), int(track.centroidarr[i][1])),
                  (int(track.centroidarr[i + 1][0]), int(track.centroidarr[i + 1][1])),
                  rand_color_list[track.id], thickness=3)
         for i, _ in enumerate(track.centroidarr)
         if i < len(track.centroidarr) - 1]
    if len(tracked_dets) > 0:
        bbox_xyxy = tracked_dets[:, :4]
        identities = tracked_dets[:, 8]
        categories = tracked_dets[:, 4]
        draw_boxes(im0, bbox_xyxy, identities, categories, self.model.names)
    gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
    cv2.imshow("qbshow", im0)
    cv2.waitKey(3)
    return log_string

2.1 Displaying in the web page
Now add the websocket server code to Python. Below is an example that readers can test on its own; to apply it inside the image-processing server, a few modifications are needed.
import asyncio
import threading
import websockets
import time
CONNECTIONS = set()

async def server_recv(websocket):
    while True:
        try:
            recv_text = await websocket.recv()
            print("recv:", recv_text)
        except websockets.ConnectionClosed as e:
            # the client closed the connection: stop reading and leave the loop
            print(e.code)
            await asyncio.sleep(0.01)
            break

async def server_hands(websocket):
    CONNECTIONS.add(websocket)
    print(CONNECTIONS)
    try:
        await websocket.wait_closed()
    finally:
        CONNECTIONS.remove(websocket)

def message_all(message):
    websockets.broadcast(CONNECTIONS, message)

async def handler(websocket, path):
    # handle a new WebSocket connection
    print("New WebSocket route is", path)
    try:
        # run registration (which waits for close) and the receive loop
        # concurrently; awaiting them one after the other would block on
        # wait_closed() and never reach server_recv()
        await asyncio.gather(server_hands(websocket),
                             server_recv(websocket))
    except websockets.exceptions.ConnectionClosedError as e:
        print(f"Connection closed unexpectedly: {e}")
    # finished: the WebSocket connection is closed
    print("WebSocket connection closed")

async def sockrun():
    async with websockets.serve(handler, "", 9090):
        await asyncio.Future()  # run forever

def main_thread():
    print("main")
    asyncio.run(sockrun())

2.2 Modifications
The key modification is that the websocket server must not block the main thread. Even though websockets handles connections with coroutines, the event loop still has to run in its own thread.
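The thread-plus-event-loop pattern can be sketched with the standard library alone. `fake_server` below is a hypothetical stand-in for `websockets.serve(...)`; the point is only that `asyncio.run` creates a fresh event loop inside the background thread while the main thread stays free:

```python
import asyncio
import queue
import threading

results = queue.Queue()

async def fake_server():
    # stands in for the real websocket server coroutine;
    # it just proves the loop in the background thread is alive
    await asyncio.sleep(0.01)
    results.put("server loop ran")

def main_thread():
    # each thread needs its own event loop; asyncio.run creates one
    asyncio.run(fake_server())

t = threading.Thread(target=main_thread)
t.start()
t.join()  # the real code would not join; predict() runs here instead
print(results.get())
```

In the real server the coroutine never returns (`await asyncio.Future()`), which is why the thread is started and then left alone while `predict()` occupies the main thread.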
def predict(cfg):
    init_tracker()
    random_color_list()
    cfg.model = cfg.model or "yolov8n.pt"
    cfg.imgsz = check_imgsz(cfg.imgsz, min_dim=2)  # check image size
    cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"
    predictor = DetectionPredictor(cfg)
    predictor()

if __name__ == "__main__":
    thread = threading.Thread(target=main_thread)
    thread.start()
    predict()

One thing worth noting: many people like to encode the frames as base64 before sending. There is no need. Encoding the binary to base64 costs CPU in the server, and the payload it sends is much larger; sending the raw binary works just as well. When sending:

cv2.waitKey(3)
_, encimg = cv2.imencode(".jpg", im0)
bys = np.array(encimg).tobytes()
message_all(bys)

Web-side test code:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>websocket</title>
</head>
<body>
    <div><button onclick="connecteClient()">Start receiving</button></div>
    <br>
    <div><img src="" id="python_img" alt="websock" style="text-align:left; width: 640px; height: 360px;"></div>
    <script>
        function connecteClient() {
            var ws = new WebSocket("ws://127.0.0.1:9090");
            ws.onopen = function () {
                console.log("WebSocket connected");
            };
            ws.onmessage = function (evt) {
                var received_msg = evt.data;
                blobToDataURI(received_msg, function (result) {
                    document.getElementById("python_img").src = result;
                });
            };
            ws.onclose = function () {
                console.log("connection closed...");
            };
        }
        function blobToDataURI(blob, callback) {
            var reader = new FileReader();
            reader.readAsDataURL(blob);
            reader.onload = function (e) {
                callback(e.target.result);
            };
        }
    </script>
</body>
</html>

Once the browser receives the binary, it displays it directly; the server never encodes to base64. The figure below shows the web page correctly receiving and displaying the jpg. Note that message_all is a broadcast: every connected websocket client gets a copy. This means the same mechanism can feed not only web pages but also the RTSP server: it connects, fetches the jpg, decodes it to RGB24, encodes that to H264, and VLC can then pull the RTSP stream from the rtsp server.
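The claim above that base64 inflates the payload is easy to quantify: base64 maps every 3 input bytes to 4 output characters, so the encoded frame is about 33% larger, on top of the CPU spent encoding it. A quick sketch with stand-in data (real JPEG frames are tens of kilobytes, but the ratio is the same):

```python
import base64

# stand-in for the encoded JPEG bytes produced by cv2.imencode
jpeg_bytes = bytes(range(256)) * 120  # 30720 bytes of arbitrary binary

b64 = base64.b64encode(jpeg_bytes)
print(len(jpeg_bytes), len(b64))            # base64 output is 4/3 the input size
print(round(len(b64) / len(jpeg_bytes), 2))  # → 1.33
```

Since WebSocket frames carry binary natively (the browser receives a Blob, as the test page shows), there is simply nothing for base64 to buy here.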
3 RTSP server: H264 encoding and output
Our RTSP server is written in C++. For speed it uses shared memory rather than a websocket client (which would of course also work). Since this side has to decode, re-encode for output, and possibly use hardware encoding, shared memory is the more efficient choice, and it is simpler too. The drawback is one extra constraint: the rtsp server can only run on the same machine. We first write an H264Frame class (H265 works the same way and can be swapped in later). For now it runs on Windows; it can be ported once everything works. Note that the code below runs only on Windows.
class H264Frame : public TThreadRunable
{
    const wchar_t* sharedMemoryMutex = L"shared_memory_mutex";
    const wchar_t* sharedMemoryName = L"shared_memory";
    LPVOID v_lpBase = NULL;
    HANDLE v_hMapFile = NULL;
    HANDLE m_shareMutex = NULL;
    std::shared_ptr<xop::RtspServer> v_server;
    std::string v_livename;
    std::wstring v_pathname;
    std::wstring v_pathmutex_name;
    int v_fps = 10;
    Encoder v_encoder;
    xop::MediaSessionId v_id;
    int v_init = 0;
    int v_quality = 28;

public:
    H264Frame(std::shared_ptr<xop::RtspServer> s, const char* livename,
              const wchar_t* pathname, xop::MediaSessionId session_id)
    {
        v_server = s;
        v_livename = livename;
        if (pathname != NULL)
        {
            v_pathname = pathname;
            v_pathmutex_name = pathname;
            v_pathmutex_name += L"_mutex";
        }
        v_id = session_id;
    }
    ~H264Frame() {}

    bool Open()
    {
        if (v_pathname.empty())
            v_hMapFile = OpenFileMappingW(FILE_MAP_ALL_ACCESS, NULL, sharedMemoryName);
        else
            v_hMapFile = OpenFileMappingW(FILE_MAP_ALL_ACCESS, NULL, v_pathname.c_str());
        if (v_hMapFile == NULL)
        {
            //printf("Waiting for shared memory creation......\n");
            return false;
        }
        return true;
    }
    void Close()
    {
        if (m_shareMutex != NULL)
            ReleaseMutex(m_shareMutex);
        if (v_hMapFile != NULL)
            CloseHandle(v_hMapFile);
    }
    bool IsOpened() const
    {
        return (v_hMapFile != NULL);
    }
    void sendFrame(uint8_t* data, int size, int sessionid)
    {
        xop::AVFrame videoFrame = { 0 };
        videoFrame.type = 0;
        videoFrame.size = size;
        videoFrame.timestamp = xop::H264Source::GetTimestamp();
        //videoFrame.notcopy = data;
        videoFrame.buffer.reset(new uint8_t[videoFrame.size]);
        std::memcpy(videoFrame.buffer.get(), data, videoFrame.size);
        v_server->PushFrame(sessionid, xop::channel_0, videoFrame);
    }
    int ReadFrame(int sessionid)
    {
        if (v_hMapFile)
        {
            v_lpBase = MapViewOfFile(v_hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, 0);
            unsigned char* pMemory = (unsigned char*)v_lpBase;
            int w = 0, h = 0;
            int frame_size = 0;
            std::memcpy(&w, pMemory, 4);
            std::memcpy(&h, pMemory + 4, 4);
            std::memcpy(&frame_size, pMemory + 8, 4);
            //printf("read the width %d height %d framesize %d\n", w, h, frame_size);
            uint8_t* rgb = pMemory + 12;
            if (v_init == 0)
            {
                v_encoder.Encoder_Open(v_fps, w, h, w, h, v_quality);
                v_init = 1;
            }
            v_encoder.RGB2YUV(rgb, w, h);
            AVPacket* pkt = v_encoder.EncodeVideo();
            if (pkt != NULL) {
                uint8_t* framebuf = pkt->data;
                frame_size = pkt->size;
                uint8_t* spsStart = NULL;
                int spsLen = 0;
                uint8_t* ppsStart = NULL;
                int ppsLen = 0;
                uint8_t* seStart = NULL;
                int seLen = 0;
                uint8_t* frameStart = NULL;  // address of the (key or non-key) frame NALU
                int frameLen = 0;
                AnalyseNalu_no0x((const uint8_t*)framebuf, frame_size,
                                 spsStart, spsLen, ppsStart, ppsLen,
                                 seStart, seLen, frameStart, frameLen);
                if (spsStart != NULL && ppsStart != NULL)
                {
                    sendFrame(spsStart, spsLen, sessionid);
                    sendFrame(ppsStart, ppsLen, sessionid);
                }
                if (frameStart != NULL)
                {
                    sendFrame(frameStart, frameLen, sessionid);
                }
                av_packet_free(&pkt);
            }
            UnmapViewOfFile(pMemory);
        }
        return -1;
    }
    void Run()
    {
        m_shareMutex = CreateMutexW(NULL, false, sharedMemoryMutex /*v_pathmutex_name.c_str()*/);
        v_encoder.Encoder_Open(10, 4096, 1080, 4096, 1080, 27);
        while (1)
        {
            if (Open() == true)
                break;
            printf("can not open share mem error\n");
            Sleep(3000);
        }
        printf("wait ok\n");
        // check whether the parameters changed, then restart
        int64_t m_start_clock = 0;
        float delay = 1000000.0f / (float)(v_fps);  // microseconds
        // merged rgb buffers
        uint8_t* rgb = NULL;
        uint8_t* cam = NULL;
        int64_t start_timestamp = av_gettime_relative();
        while (1)
        {
            if (IsStop())
                break;
            // grab one frame (rgb)
            if (v_hMapFile)
            {
                WaitForSingleObject(m_shareMutex, INFINITE);
                ReadFrame(v_id);
                ReleaseMutex(m_shareMutex);
            }
            else
            {
                printf("Shared memory handle error\n");
                break;
            }
            m_start_clock++;  // count frames so the target time advances
            int64_t tnow = av_gettime_relative();
            int64_t total = tnow - start_timestamp;      // total elapsed time
            int64_t fix_consume = m_start_clock * delay; // where we should be
            if (total < fix_consume) {
                int64_t diff = fix_consume - total;
                if (diff > 0) {  // ahead of schedule: sleep off the difference
                    std::this_thread::sleep_for(std::chrono::microseconds(diff));
                }
            }
        }
        if (v_init != 0)
        {
            v_encoder.Encoder_Close();
            v_init = 0;
        }
    }
};

int main(int argc, char** argv)
{
    std::string suffix = "live";
    std::string ip = "127.0.0.1";
    std::string port = "8554";
    std::string rtsp_url = "rtsp://" + ip + ":" + port + "/" + suffix;

    std::shared_ptr<xop::EventLoop> event_loop(new xop::EventLoop());
    std::shared_ptr<xop::RtspServer> server = xop::RtspServer::Create(event_loop.get());
    if (!server->Start("0.0.0.0", atoi(port.c_str()))) {
        printf("RTSP Server listen on %s failed.\n", port.c_str());
        return 0;
    }
#ifdef AUTH_CONFIG
    server->SetAuthConfig("-_-", "admin", "12345");
#endif
    xop::MediaSession* session = xop::MediaSession::CreateNew("live");
    session->AddSource(xop::channel_0, xop::H264Source::CreateNew());
    //session->StartMulticast();
    session->AddNotifyConnectedCallback([](xop::MediaSessionId sessionId,
                                           std::string peer_ip, uint16_t peer_port) {
        printf("RTSP client connect, ip=%s, port=%hu \n", peer_ip.c_str(), peer_port);
    });
    session->AddNotifyDisconnectedCallback([](xop::MediaSessionId sessionId,
                                              std::string peer_ip, uint16_t peer_port) {
        printf("RTSP client disconnect, ip=%s, port=%hu \n", peer_ip.c_str(), peer_port);
    });
    xop::MediaSessionId session_id = server->AddSession(session);

    H264Frame h264frame(server, suffix.c_str(), L"shared_memory", session_id);
    //std::thread t1(SendFrameThread, server.get(), session_id, h264_file);
    //t1.detach();
    std::cout << "Play URL: " << rtsp_url << std::endl;
    h264frame.Start();
    while (1) {
        xop::Timer::Sleep(100);
    }
    h264frame.Join();
    getchar();
    return 0;
}

Once this is done, start the Python server, start the rtsp server, and finally open VLC to view the stream. This completes sharing the frames produced on the Python side (PyTorch, YOLO, and so on) with C++ through shared memory.
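The frame pacing inside `Run()` above is worth isolating: each frame's target time is `frame_index * (1e6 / fps)` microseconds since start, and the loop sleeps off any lead over that target. A Python sketch of the same arithmetic (function name `pacing_sleep_us` is mine, not from the C++ code):

```python
def pacing_sleep_us(frame_index, fps, elapsed_us):
    """Microseconds the pacing loop would sleep after `frame_index` frames,
    mirroring the fix_consume logic: sleep only when ahead of schedule."""
    delay = 1_000_000.0 / fps            # per-frame budget in microseconds
    fix_consume = frame_index * delay    # where the clock should be by now
    diff = fix_consume - elapsed_us
    return int(diff) if diff > 0 else 0  # behind schedule -> no sleep

# frame 10 at 10 fps should land at the 1-second mark;
# if only 0.9 s elapsed, sleep the remaining 100000 us
print(pacing_sleep_us(10, 10, 900_000))  # → 100000
```

This is why the missing frame-counter increment matters: if `m_start_clock` never advances, `fix_consume` stays at zero and the loop spins as fast as the shared-memory reads allow.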
An example of using shared memory on the Python side:
import mmap
import contextlib
import time

while True:
    with contextlib.closing(mmap.mmap(-1, 25, tagname="test",
                                      access=mmap.ACCESS_READ)) as m:
        s = m.read(1024)  # .replace(b"\x00", b"")
        print(s)
    time.sleep(1)

(Note that the `tagname` argument, like the C++ OpenFileMappingW side, is Windows-only.) The reader can complete the rest of the code on their own.