<output id="qn6qe"></output>

    1. <output id="qn6qe"><tt id="qn6qe"></tt></output>
    2. <strike id="qn6qe"></strike>

      亚洲 日本 欧洲 欧美 视频,日韩中文字幕有码av,一本一道av中文字幕无码,国产线播放免费人成视频播放,人妻少妇偷人无码视频,日夜啪啪一区二区三区,国产尤物精品自在拍视频首页,久热这里只有精品12

      【OpenVINO™】Deploying PP-YOLOE for Object Detection in C# with OpenVINO™

      Preface

      OpenVINO™ C# API is a .NET wrapper for OpenVINO™. It is developed against the latest OpenVINO™ libraries and calls the OpenVINO™ Runtime from .NET through the OpenVINO™ C API, so its usage is consistent with the OpenVINO™ C++ API. Because OpenVINO™ C# API is built on top of OpenVINO™, it supports exactly the same platforms as OpenVINO™ itself; see the OpenVINO™ documentation for details. With OpenVINO™ C# API, you can use C# under .NET, .NET Framework, and other frameworks to run accelerated deep-learning model inference on the target platform.
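
      To give a sense of how closely the call sequence mirrors the C++ API, below is a minimal, hypothetical sketch of a typical inference pipeline (the model path and device name are placeholders, and filling the input tensors is omitted; the complete PP-YOLOE code is shown in Section 4):

      using OpenVinoSharp;

      Core core = new Core();                                              // initialize the OpenVINO Runtime core
      Model model = core.read_model("model.xml");                          // read a model (placeholder path)
      CompiledModel compiled_model = core.compile_model(model, "CPU");     // compile the model for the target device
      InferRequest infer_request = compiled_model.create_infer_request();  // create an inference request
      // ... fill the input tensors here (see Section 4) ...
      infer_request.infer();                                               // run synchronous inference
      Tensor output_tensor = infer_request.get_output_tensor(0);           // get the first output tensor
      float[] output_data = output_tensor.get_data<float>((int)output_tensor.get_size());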

      The OpenVINO™ C# API project is available at:

      https://github.com/guojin-yan/OpenVINO-CSharp-API.git
      

      The sample source code is available at:

      https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples.git
      

      1. Introduction

      PP-YOLOE is an excellent single-stage anchor-free model built on PP-YOLOv2 that surpasses a variety of popular YOLO models. PP-YOLOE comes in a series of model sizes, named s/m/l/x, configured through width and depth multipliers. PP-YOLOE avoids special operators such as deformable convolution and Matrix NMS so that it can be deployed easily on a wide range of hardware. In this article, we use the OpenVINO™ C# API to deploy PP-YOLOE for object detection.

      2. Project Environment and Dependencies

      All dependencies required by this project can be installed as NuGet packages. The following NuGet packages are needed (installation commands are sketched after the list):

      • OpenVINO C# API NuGet packages:
      OpenVINO.CSharp.API
      OpenVINO.runtime.win
      OpenVINO.CSharp.API.Extensions
      OpenVINO.CSharp.API.Extensions.OpenCvSharp
      
      • OpenCvSharp NuGet packages:
      OpenCvSharp4
      OpenCvSharp4.Extensions
      OpenCvSharp4.runtime.win
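
      Assuming the project is managed with the .NET CLI rather than the Visual Studio NuGet Package Manager, the packages can be added with commands along the following lines (versions are omitted and resolve to the latest available releases):

      dotnet add package OpenVINO.CSharp.API
      dotnet add package OpenVINO.runtime.win
      dotnet add package OpenVINO.CSharp.API.Extensions
      dotnet add package OpenVINO.CSharp.API.Extensions.OpenCvSharp
      dotnet add package OpenCvSharp4
      dotnet add package OpenCvSharp4.Extensions
      dotnet add package OpenCvSharp4.runtime.win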
      

      3. Project Output

      The project writes its results to the console; a typical run produces the following output:

      <00:00:00> Sending http request to https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/releases/download/Model/ppyoloe_plus_crn_l_80e_coco.tar.
      <00:00:02> Http Response Accquired.
      <00:00:02> Total download length is 199.68 Mb.
      <00:00:02> Download Started.
      <00:00:02> File created.
      <00:02:03> Downloading: [■■■■■■■■■■] 100% <00:02:03 1.81 Mb/s> 199.68 Mb/199.68 Mb downloaded.
      <00:02:03> File Downloaded, saved in E:\GitSpace\OpenVINO-CSharp-API-Samples\model_samples\ppyoloe\ppyoloe_opencvsharp\bin\Release\net6.0\model\ppyoloe_plus_crn_l_80e_coco.tar.
      <00:00:00> Sending http request to https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/releases/download/Image/test_det_02.jpg.
      <00:00:02> Http Response Accquired.
      <00:00:02> Total download length is 0.16 Mb.
      <00:00:02> Download Started.
      <00:00:02> File created.
      <00:00:02> Downloading: [■■■■■■■■■■] 100% <00:00:02 0.06 Mb/s> 0.16 Mb/0.16 Mb downloaded.
      <00:00:02> File Downloaded, saved in E:\GitSpace\OpenVINO-CSharp-API-Samples\model_samples\ppyoloe\ppyoloe_opencvsharp\bin\Release\net6.0\model\test_image.jpg.
      [ INFO ] Inference device: CPU
      [ INFO ] Start PP-YOLOE model inference.
      [ INFO ] 1. Initialize OpenVINO Runtime Core success, time spend: 4.5204ms.
      [ INFO ] 2. Read inference model success, time spend: 228.4451ms.
      [ INFO ] Inference Model
      [ INFO ]   Model name: Model0
      [ INFO ]   Input:
      [ INFO ]      name: scale_factor
      [ INFO ]      type: float
      [ INFO ]      shape: Shape : {?,2}
      [ INFO ]      name: image
      [ INFO ]      type: float
      [ INFO ]      shape: Shape : {?,3,640,640}
      [ INFO ]   Output:
      [ INFO ]      name: multiclass_nms3_0.tmp_0
      [ INFO ]      type: float
      [ INFO ]      shape: Shape : {?,6}
      [ INFO ]      name: multiclass_nms3_0.tmp_2
      [ INFO ]      type: int32_t
      [ INFO ]      shape: Shape : {?}
      [ INFO ] 3. Loading a model to the device success, time spend:501.0716ms.
      [ INFO ] 4. Create an infer request success, time spend:0.2663ms.
      [ INFO ] 5. Process input images success, time spend:30.1001ms.
      [ INFO ] 6. Set up input data success, time spend:2.3631ms.
      [ INFO ] 7. Do inference synchronously success, time spend:286.1085ms.
      [ INFO ] 8. Get infer result data success, time spend:0.5189ms.
      [ INFO ] 9. Process reault  success, time spend:0.4425ms.
      [ INFO ] The result save to E:\GitSpace\OpenVINO-CSharp-API-Samples\model_samples\ppyoloe\ppyoloe_opencvsharp\bin\Release\net6.0\model\test_image_result.jpg
      

      The prediction result on the test image is shown below:

      (Figure: PP-YOLOE detection result on the test image)

      4. Code Walkthrough

      The namespaces used in the code are as follows:

      using OpenCvSharp.Dnn;
      using OpenCvSharp;
      using OpenVinoSharp;
      using OpenVinoSharp.Extensions;
      using OpenVinoSharp.Extensions.utility;
      using System.Runtime.InteropServices;
      using OpenVinoSharp.preprocess;
      using OpenVinoSharp.Extensions.model;
      using OpenVinoSharp.Extensions.result;
      using OpenVinoSharp.Extensions.process;
      
      namespace ppyoloe_opencvsharp
      {
          internal class Program
          {  
          	....
          }
      }
      

      The model prediction code is defined below; three variants are provided:

      • Standard prediction workflow:
      static void ppyoloe_det(string model_path, string image_path, string device)
      {
          // -------- Step 1. Initialize OpenVINO Runtime Core --------
          Core core = new Core();
          // -------- Step 2. Read inference model --------
          Model model = core.read_model(model_path);
          OvExtensions.printf_model_info(model);
          // -------- Step 3. Loading a model to the device --------
          CompiledModel compiled_model = core.compile_model(model, device);
          // -------- Step 4. Create an infer request --------
          InferRequest infer_request = compiled_model.create_infer_request();
          // -------- Step 5. Process input images --------
          Mat image = new Mat(image_path); // Read image by opencvsharp
          float[] factor = new float[] { 640.0f / (float)image.Rows, 640.0f / (float)image.Cols };
          float[] im_shape = new float[] { 640.0f, 640.0f };
          Mat input_mat = CvDnn.BlobFromImage(image, 1.0 / 255.0, new OpenCvSharp.Size(640, 640), 0, true, false);
          float[] input_data = new float[640 * 640 * 3];
          Marshal.Copy(input_mat.Ptr(0), input_data, 0, input_data.Length);
          // -------- Step 6. Set up input data --------
          Tensor input_tensor_data = infer_request.get_tensor("image");
          input_tensor_data.set_shape(new Shape(1, 3, 640, 640));
          input_tensor_data.set_data<float>(input_data);
          Tensor input_tensor_factor = infer_request.get_tensor("scale_factor");
          input_tensor_factor.set_shape(new Shape(1, 2));
          input_tensor_factor.set_data<float>(factor);
          // -------- Step 7. Do inference synchronously --------
          infer_request.infer();
          // -------- Step 8. Get infer result data --------
          Tensor output_tensor = infer_request.get_output_tensor(0);
          int output_length = (int)output_tensor.get_size();
          float[] output_data = output_tensor.get_data<float>(output_length);
          // -------- Step 9. Process result --------
          List<Rect> position_boxes = new List<Rect>();
          List<int> class_ids = new List<int>();
          List<float> confidences = new List<float>();
          // Each detection occupies 6 floats: [class_id, score, x1, y1, x2, y2].
          for (int i = 0; i < output_length / 6; ++i)
          {
              if (output_data[6 * i + 1] > 0.5)
              {
                  class_ids.Add((int)output_data[6 * i]);
                  confidences.Add(output_data[6 * i + 1]);
                  position_boxes.Add(new Rect((int)output_data[6 * i + 2], (int)output_data[6 * i + 3],
                      (int)(output_data[6 * i + 4] - output_data[6 * i + 2]),
                      (int)(output_data[6 * i + 5] - output_data[6 * i + 3])));
              }
          }
          for (int index = 0; index < class_ids.Count; index++)
          {
              Cv2.Rectangle(image, position_boxes[index], new Scalar(0, 0, 255), 2, LineTypes.Link8);
              Cv2.Rectangle(image, new OpenCvSharp.Point(position_boxes[index].TopLeft.X, position_boxes[index].TopLeft.Y + 30),
                  new OpenCvSharp.Point(position_boxes[index].BottomRight.X, position_boxes[index].TopLeft.Y), new Scalar(0, 255, 255), -1);
              Cv2.PutText(image, class_ids[index] + "-" + confidences[index].ToString("0.00"),
                  new OpenCvSharp.Point(position_boxes[index].X, position_boxes[index].Y + 25),
                  HersheyFonts.HersheySimplex, 0.8, new Scalar(0, 0, 0), 2);
          }
          string output_path = Path.Combine(Path.GetDirectoryName(Path.GetFullPath(image_path)),
              Path.GetFileNameWithoutExtension(image_path) + "_result.jpg");
          Cv2.ImWrite(output_path, image);
          Slog.INFO("The result save to " + output_path);
          Cv2.ImShow("Result", image);
          Cv2.WaitKey(0);
      }
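
      The post-processing above follows PaddleDetection's exported NMS output layout: each detection in multiclass_nms3_0.tmp_0 is a row of six floats, [class_id, score, x1, y1, x2, y2], and the box coordinates are already mapped back to the original image through the scale_factor input, which is why the rectangles can be drawn directly on the un-resized image. As a purely illustrative refactor (this helper is not part of the sample), the same decoding could be pulled into a reusable method:

          // Hypothetical helper: decode PP-YOLOE NMS output rows of the form
          // [class_id, score, x1, y1, x2, y2] into (classId, score, box) tuples.
          static IEnumerable<(int ClassId, float Score, Rect Box)> decode_detections(
              float[] output_data, float score_threshold = 0.5f)
          {
              for (int i = 0; i + 6 <= output_data.Length; i += 6)
              {
                  float score = output_data[i + 1];
                  if (score <= score_threshold)
                      continue;
                  int x1 = (int)output_data[i + 2], y1 = (int)output_data[i + 3];
                  int x2 = (int)output_data[i + 4], y2 = (int)output_data[i + 5];
                  yield return ((int)output_data[i], score, new Rect(x1, y1, x2 - x1, y2 - y1));
              }
          }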
      
      • Prediction with the preprocessing steps compiled into the model. Here the color conversion, 1/255 scaling, element-type conversion, and NHWC-to-NCHW layout change are attached to the model through the PrePostProcessor, so the application only needs to copy the raw U8 pixels of the resized image into the input tensor:
      static void ppyoloe_det_with_process(string model_path, string image_path, string device)
      {
          // -------- Step 1. Initialize OpenVINO Runtime Core --------
          Core core = new Core();
          // -------- Step 2. Read inference model --------
          Model model = core.read_model(model_path);
          OvExtensions.printf_model_info(model);
          PrePostProcessor processor = new PrePostProcessor(model);
          Tensor input_tensor_pro = new Tensor(new OvType(ElementType.U8), new Shape(1, 640, 640, 3));
          InputInfo input_info = processor.input("image");
          InputTensorInfo input_tensor_info = input_info.tensor();
          input_tensor_info.set_from(input_tensor_pro).set_layout(new Layout("NHWC")).set_color_format(ColorFormat.BGR);
          PreProcessSteps process_steps = input_info.preprocess();
          process_steps.convert_color(ColorFormat.RGB).resize(ResizeAlgorithm.RESIZE_LINEAR)
              .convert_element_type(new OvType(ElementType.F32)).scale(255.0f).convert_layout(new Layout("NCHW"));
          Model new_model = processor.build();
          // -------- Step 3. Loading a model to the device --------
          CompiledModel compiled_model = core.compile_model(new_model, device);
          // -------- Step 4. Create an infer request --------
          InferRequest infer_request = compiled_model.create_infer_request();
          // -------- Step 5. Process input images --------
          Mat image = new Mat(image_path); // Read image by opencvsharp
          Mat input_image = new Mat();
          Cv2.Resize(image, input_image, new OpenCvSharp.Size(640, 640));
          float[] factor = new float[] { 640.0f / (float)image.Rows, 640.0f / (float)image.Cols };
          float[] im_shape = new float[] { 640.0f, 640.0f };
          // -------- Step 6. Set up input data --------
          Tensor input_tensor_data = infer_request.get_tensor("image");
          byte[] input_data = new byte[3 * 640 * 640];
          Marshal.Copy(input_image.Ptr(0), input_data, 0, input_data.Length);
          IntPtr destination = input_tensor_data.data();
          Marshal.Copy(input_data, 0, destination, input_data.Length);
          Tensor input_tensor_factor = infer_request.get_tensor("scale_factor");
          input_tensor_factor.set_shape(new Shape(1, 2));
          input_tensor_factor.set_data<float>(factor);
          // -------- Step 7. Do inference synchronously --------
          infer_request.infer();
          // -------- Step 8. Get infer result data --------
          Tensor output_tensor = infer_request.get_output_tensor(0);
          int output_length = (int)output_tensor.get_size();
          float[] output_data = output_tensor.get_data<float>(output_length);
          // -------- Step 9. Process result --------
          List<Rect> position_boxes = new List<Rect>();
          List<int> class_ids = new List<int>();
          List<float> confidences = new List<float>();
      
          // Each detection occupies 6 floats: [class_id, score, x1, y1, x2, y2].
          for (int i = 0; i < output_length / 6; ++i)
          {
              if (output_data[6 * i + 1] > 0.5)
              {
                  class_ids.Add((int)output_data[6 * i]);
                  confidences.Add(output_data[6 * i + 1]);
                  position_boxes.Add(new Rect((int)output_data[6 * i + 2], (int)output_data[6 * i + 3],
                      (int)(output_data[6 * i + 4] - output_data[6 * i + 2]),
                      (int)(output_data[6 * i + 5] - output_data[6 * i + 3])));
              }
          }
          for (int index = 0; index < class_ids.Count; index++)
          {
              Cv2.Rectangle(image, position_boxes[index], new Scalar(0, 0, 255), 2, LineTypes.Link8);
              Cv2.Rectangle(image, new OpenCvSharp.Point(position_boxes[index].TopLeft.X, position_boxes[index].TopLeft.Y + 30),
                  new OpenCvSharp.Point(position_boxes[index].BottomRight.X, position_boxes[index].TopLeft.Y), new Scalar(0, 255, 255), -1);
              Cv2.PutText(image, class_ids[index] + "-" + confidences[index].ToString("0.00"),
                  new OpenCvSharp.Point(position_boxes[index].X, position_boxes[index].Y + 25),
                  HersheyFonts.HersheySimplex, 0.8, new Scalar(0, 0, 0), 2);
          }
          string output_path = Path.Combine(Path.GetDirectoryName(Path.GetFullPath(image_path)),
              Path.GetFileNameWithoutExtension(image_path) + "_result.jpg");
          Cv2.ImWrite(output_path, image);
          Slog.INFO("The result save to " + output_path);
          Cv2.ImShow("Result", image);
          Cv2.WaitKey(0);
      }
      
      
      • Using the wrapper classes provided by the extensions package:
      static void ppyoloe_det_using_extensions(string model_path, string image_path, string device)
      {
          PPYoloeConfig config = new PPYoloeConfig();
          config.set_model(model_path);
          PPYoloeDet det = new PPYoloeDet(config);
          Mat image = Cv2.ImRead(image_path);
          DetResult result = det.predict(image);
          Mat result_im = Visualize.draw_det_result(result, image);
          Cv2.ImShow("Result", result_im);
          Cv2.WaitKey(0);
      }
      

      The main function of the program is shown below. On first run it downloads the converted prediction model (about 200 MB) and a test image, then calls the prediction method defined above (the other two variants are left commented out):

      static void Main(string[] args)
      {
          string model_path = "";
          string image_path = "";
          string device = "CPU";
          if (args.Length == 0)
          {
              if (!Directory.Exists("./model"))
              {
                  Directory.CreateDirectory("./model");
              }
              if (!File.Exists("./model/model.pdiparams")
                  && !File.Exists("./model/model.pdmodel"))
              {
                  if (!File.Exists("./model/ppyoloe_plus_crn_l_80e_coco.tar"))
                  {
                      _ = Download.download_file_async("https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/releases/download/Model/ppyoloe_plus_crn_l_80e_coco.tar",
                          "./model/ppyoloe_plus_crn_l_80e_coco.tar").Result;
                  }
                  Download.unzip("./model/ppyoloe_plus_crn_l_80e_coco.tar", "./model/");
              }
      
              if (!File.Exists("./model/test_image.jpg"))
              {
                  _ = Download.download_file_async("https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/releases/download/Image/test_det_02.jpg",
                      "./model/test_image.jpg").Result;
              }
              model_path = "./model/model.pdmodel";
              image_path = "./model/test_image.jpg";
          }
          else if (args.Length >= 2)
          {
              model_path = args[0];
              image_path = args[1];
              // Use the device name when it is provided; otherwise keep the default "CPU".
              if (args.Length >= 3)
              {
                  device = args[2];
              }
          }
          else
          {
              Console.WriteLine("Please enter the correct command parameters, for example:");
              Console.WriteLine("> 1. dotnet run");
              Console.WriteLine("> 2. dotnet run <model path> <image path> <device name>");
          }
          // -------- Get OpenVINO runtime version --------
      
          OpenVinoSharp.Version version = Ov.get_openvino_version();
      
          Slog.INFO("---- OpenVINO INFO----");
          Slog.INFO("Description : " + version.description);
          Slog.INFO("Build number: " + version.buildNumber);
      
          Slog.INFO("Predict model files: " + model_path);
          Slog.INFO("Predict image  files: " + image_path);
          Slog.INFO("Inference device: " + device);
          Slog.INFO("Start RT-DETR model inference.");
      
          ppyoloe_det(model_path, image_path, device);
          //ppyoloe_det_with_process(model_path, image_path, device);
          //ppyoloe_det_using_extensions(model_path, image_path, device);
      }
      

      5. Conclusion

      In this project, we used the previously developed OpenVINO™ C# API to deploy the PP-YOLOE model and implement object detection.

      • The complete sample code is available at:
      https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/blob/master/model_samples/ppyoloe/ppyoloe_opencvsharp/Program.cs
      
      • To accommodate EmguCV users, an EmguCV version has also been developed; it is available at:
      https://github.com/guojin-yan/OpenVINO-CSharp-API-Samples/blob/master/model_samples/ppyoloe/ppyoloe_emgucv/Program.cs
      

      Finally, if you run into any problems while using the project, you are welcome to contact me.
