
My First Experience Integrating Elasticsearch into My Website

   Recently I tried integrating Elasticsearch into my website (https://www.xiandanplay.com/) to implement a search feature. This is my first time learning and using Elastic, so if anything here is wrong, feel free to point it out. Without further ado, here is the rough flow.

      The SDK version I use is Elastic.Clients.Elasticsearch, Version=8.0.0.0; the official documentation is at Installation | Elasticsearch .NET Client [8.0] | Elastic.

      I originally planned to deploy ES on Ubuntu alongside my application, but installing Kibana there hit one problem after another, so in the end I reluctantly installed it on Windows next to my SQL Server. For a server with 2 GB of memory, that is honestly a bit of a squeeze.

1. Configuring ES

 I enabled password authentication in ES. Below is my application configuration:

      "Search": {
          "IsEnable": "true",
          "Uri": "http://127.0.0.1:9200/",
          "User": "123",
          "Password": "123"
        }
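For reference, password authentication is controlled on the ES server side. A minimal sketch of the relevant `elasticsearch.yml` switch (the exact options vary by ES version, so treat this as an assumption rather than a complete security setup):

```yaml
# elasticsearch.yml: require basic authentication
# (security is enabled by default in 8.x)
xpack.security.enabled: true
```

Passwords for the built-in users can then be set with the bundled `elasticsearch-reset-password` tool in 8.x.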
Then add a new assembly for the search code.

Then write a constructor in ElasticSearchClient that configures the ES client:

      using Core.Common;
      using Core.CPlatform;
      using Core.SearchEngine.Attr;
      using Elastic.Clients.Elasticsearch;
      using Elastic.Clients.Elasticsearch.IndexManagement;
      using Elastic.Transport;
      
      namespace Core.SearchEngine.Client
      {
          public class ElasticSearchClient : IElasticSearchClient
          {
              private ElasticsearchClient elasticsearchClient;
              public ElasticSearchClient()
              {
                  string uri = ConfigureProvider.configuration.GetSection("Search:Uri").Value;
                  string username = ConfigureProvider.configuration.GetSection("Search:User").Value;
                  string password = ConfigureProvider.configuration.GetSection("Search:Password").Value;
                  var settings = new ElasticsearchClientSettings(new Uri(uri))
                                .Authentication(new BasicAuthentication(username, password)).DisableDirectStreaming();
                  elasticsearchClient = new ElasticsearchClient(settings);
              }
              public ElasticsearchClient GetClient()
              {
                  return elasticsearchClient;
              }
          }
      }
      

   Next, note that the SDK's official site gives this advice:

 Client applications should create a single instance of the client that is reused for the entire lifetime of the application. Internally, the client manages and maintains HTTP connections to nodes, reusing them to optimize performance. If you use a dependency injection container, the client instance should be registered with a singleton lifetime.

So I simply register it with AddSingleton:

      using Core.SearchEngine.Client;
      using Microsoft.Extensions.DependencyInjection;
      
      namespace Core.SearchEngine
      {
          public static class ConfigureSearchEngine
          {
              public static void AddSearchEngine(this IServiceCollection services)
              {
                  services.AddSingleton<IElasticSearchClient, ElasticSearchClient>();
              }
          }
      }

2. Submitting articles and syncing them to ES

 Next comes syncing articles to ES. I write to the database first, then publish to RabbitMQ, and write to ES through the event bus (see my earlier post on implementing email push with EventBus).

First define an ES model:

      using Core.SearchEngine.Attr;
      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Threading.Tasks;
      using XianDan.Model.BizEnum;
      
      namespace XianDan.Domain.Article
      {
    [ElasticsearchIndex(IndexName ="t_article")] // custom attribute; the SDK does not provide this
    public class Article_ES
    {
        public long Id { get; set; }
        /// <summary>
        /// Author
        /// </summary>
        public string Author { get; set; }
        /// <summary>
        /// Title
        /// </summary>
        public string Title { get; set; }
        /// <summary>
        /// Tags
        /// </summary>
        public string Tag { get; set; }
        /// <summary>
        /// Summary
        /// </summary>
        public string Description { get; set; }
        /// <summary>
        /// Content
        /// </summary>
        public string ArticleContent { get; set; }
        /// <summary>
        /// Category (column)
        /// </summary>
        public long ArticleCategoryId { get; set; }
        /// <summary>
        /// Whether the article is original
        /// </summary>
        public bool? IsOriginal { get; set; }
        /// <summary>
        /// Comment count
        /// </summary>
        public int? CommentCount { get; set; }
        /// <summary>
        /// Like count
        /// </summary>
        public int? PraiseCount { get; set; }
        /// <summary>
        /// View count
        /// </summary>
        public int? BrowserCount { get; set; }
        /// <summary>
        /// Favorite count
        /// </summary>
        public int? CollectCount { get; set; }
        /// <summary>
        /// Creation time
        /// </summary>
        public DateTime CreateTime { get; set; }
    }
      }
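Since the `[ElasticsearchIndex]` attribute is custom, here is a minimal sketch of what it and the matching `GetIndexName` helper might look like. The real implementation in Core.SearchEngine.Attr isn't shown in the post, so everything beyond the attribute name and its `IndexName` property is my assumption:

```csharp
using System;
using System.Reflection;

namespace Core.SearchEngine.Attr
{
    // Marks a POCO with the ES index it maps to (hypothetical sketch).
    [AttributeUsage(AttributeTargets.Class)]
    public class ElasticsearchIndexAttribute : Attribute
    {
        public string IndexName { get; set; }
    }
}
```

`GetIndexName` could then read the attribute via reflection, falling back to the lower-cased type name when the attribute is absent:

```csharp
public static string GetIndexName(Type type)
{
    var attr = type.GetCustomAttribute<ElasticsearchIndexAttribute>();
    return attr?.IndexName ?? type.Name.ToLowerInvariant();
}
```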

Then create the index:

string index = esArticleClient.GetIndexName(typeof(Article_ES));
await esArticleClient.GetClient().Indices.CreateAsync<Article_ES>(index, s =>
    s.Mappings(x => x.Properties(
        t => t.LongNumber(l => l.Id)
              // ik_max_word is a string constant holding the analyzer name
              .Text(l => l.Title, z => z.Analyzer(ik_max_word))
              .Keyword(l => l.Author)
              .Text(l => l.Tag, z => z.Analyzer(ik_max_word))
              .Text(l => l.Description, z => z.Analyzer(ik_max_word))
              .Text(l => l.ArticleContent, z => z.Analyzer(ik_max_word))
              .LongNumber(l => l.ArticleCategoryId)
              .Boolean(l => l.IsOriginal)
              .IntegerNumber(l => l.BrowserCount)
              .IntegerNumber(l => l.PraiseCount)
              .IntegerNumber(l => l.CollectCount)
              .IntegerNumber(l => l.CommentCount)
              .Date(l => l.CreateTime)
    ))
);
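For comparison, the CreateAsync call above should produce roughly the following index mapping. This is a sketch: the field names assume the client's default camelCase serialization, which matches the `Field("title")` style used later in this post.

```json
{
  "mappings": {
    "properties": {
      "id":                { "type": "long" },
      "title":             { "type": "text", "analyzer": "ik_max_word" },
      "author":            { "type": "keyword" },
      "tag":               { "type": "text", "analyzer": "ik_max_word" },
      "description":       { "type": "text", "analyzer": "ik_max_word" },
      "articleContent":    { "type": "text", "analyzer": "ik_max_word" },
      "articleCategoryId": { "type": "long" },
      "isOriginal":        { "type": "boolean" },
      "browserCount":      { "type": "integer" },
      "praiseCount":       { "type": "integer" },
      "collectCount":      { "type": "integer" },
      "commentCount":      { "type": "integer" },
      "createTime":        { "type": "date" }
    }
  }
}
```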

Then, every time an article is created, updated, or deleted, write it to MQ, for example:

       private async Task SendToMq(Article article, Operation operation)
              {
                  ArticleEventData articleEventData = new ArticleEventData();
                  articleEventData.Operation = operation;
                  articleEventData.Article_ES = MapperUtil.Map<Article, Article_ES>(article);
                  TaskRecord taskRecord = new TaskRecord();
                  taskRecord.Id = CreateEntityId();
                  taskRecord.TaskType = TaskRecordType.MQ;
                  taskRecord.TaskName = "發送文章";
                  taskRecord.TaskStartTime = DateTime.Now;
                  taskRecord.TaskStatu = (int)MqMessageStatu.New;
                  articleEventData.Unique = taskRecord.Id.ToString();
                  taskRecord.TaskValue = JsonConvert.SerializeObject(articleEventData);
                  await unitOfWork.GetRepository<TaskRecord>().InsertAsync(taskRecord);
                  await unitOfWork.CommitAsync();
                  try
                  {
                      eventBus.Publish(GetMqExchangeName(), ExchangeType.Direct, BizKey.ArticleQueueName, articleEventData);
                  }
                  catch (Exception ex)
                  {
                      var taskRecordRepository = unitOfWork.GetRepository<TaskRecord>();
                      TaskRecord update = await taskRecordRepository.SelectByIdAsync(taskRecord.Id);
                      update.TaskStatu = (int)MqMessageStatu.Fail;
                      update.LastUpdateTime = DateTime.Now;
                      update.TaskResult = "發送失敗";
                      update.AdditionalData = ex.Message;
                      await taskRecordRepository.UpdateAsync(update);
                      await unitOfWork.CommitAsync();
                  }
      
              }

After the MQ subscription fires, the handler writes to ES; I'll skip the concrete create/update/delete methods.
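Since the consumer side is skipped, here is a minimal sketch of what the MQ handler might do. `ArticleEventData` and `Operation` come from the post; `HandleAsync`, the enum member names, and the exact client overloads are my assumptions:

```csharp
// Hypothetical MQ consumer: mirror the database change into Elasticsearch.
public async Task HandleAsync(ArticleEventData data)
{
    var client = esArticleClient.GetClient();
    string index = esArticleClient.GetIndexName(typeof(Article_ES));
    switch (data.Operation)
    {
        case Operation.Insert:
        case Operation.Update:
            // Indexing with an explicit id overwrites the whole document,
            // so inserts and updates can share one code path.
            await client.IndexAsync(data.Article_ES, i => i.Index(index).Id(data.Article_ES.Id));
            break;
        case Operation.Delete:
            await client.DeleteAsync(index, data.Article_ES.Id);
            break;
    }
}
```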

3. Querying ES

  Once articles are written, we can start querying them. The query API the SDK provides is fairly involved: everything is chained together lambda by lambda. I haven't found a better approach yet, so this will do for now.

   First create a list to hold the query expressions:

      List<Action<QueryDescriptor<Article_ES>>> querys = new List<Action<QueryDescriptor<Article_ES>>>();

   Then define the fields to query.

   Here I use MultiMatch so that multiple fields match the same keyword, and specify the ik_smart analyzer:

      Field[] fields =
                      {
                          new Field("title"),
                          new Field("tag"),
                          new Field("articleContent"),
                          new Field("description")
                      };
       querys.Add(s => s.MultiMatch(y => y.Fields(Fields.FromFields(fields)).Analyzer(ik_smart).Query(keyword).Type(TextQueryType.MostFields)));
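For readers more familiar with the raw Query DSL, the chained call above should correspond roughly to this JSON (a sketch, with a placeholder keyword):

```json
{
  "multi_match": {
    "query": "your keyword here",
    "fields": ["title", "tag", "articleContent", "description"],
    "analyzer": "ik_smart",
    "type": "most_fields"
  }
}
```

`most_fields` sums the scores of all matching fields, which suits the case where title, tags, summary, and body all describe the same article.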

Define result highlighting, which wraps the matched terms in a tag; the front end then needs to style it:

      :deep(.search-words) em {
          color: #ee0f29;
          font-style: initial;
      }
       Dictionary<Field, HighlightField> highlightFields = new Dictionary<Field, HighlightField>();
                  highlightFields.Add(new Field("title"), new HighlightField()
                  {
                      PreTags = new List<string> { "<em>" },
                      PostTags = new List<string> { "</em>" },
                  });
                  highlightFields.Add(new Field("description"), new HighlightField()
                  {
                      PreTags = new List<string> { "<em>" },
                      PostTags = new List<string> { "</em>" },
                  });
                  Highlight highlight = new Highlight()
                  {
                      Fields = highlightFields
                  };
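The highlight configuration above should translate to roughly this request fragment (a sketch for comparison with the raw DSL):

```json
{
  "highlight": {
    "fields": {
      "title":       { "pre_tags": ["<em>"], "post_tags": ["</em>"] },
      "description": { "pre_tags": ["<em>"], "post_tags": ["</em>"] }
    }
  }
}
```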

To improve query efficiency, I only fetch a subset of the fields:

       SourceFilter sourceFilter = new SourceFilter();
                  sourceFilter.Includes = Fields.FromFields(new Field[] { "title", "id", "author", "description", "createTime", "browserCount", "commentCount" });
                  SourceConfig sourceConfig = new SourceConfig(sourceFilter);
                  Action<SearchRequestDescriptor<Article_ES>> configureRequest = s => s.Index(index)
                  .From((homeArticleCondition.CurrentPage - 1) * homeArticleCondition.PageSize)
                  .Size(homeArticleCondition.PageSize)
                  .Query(x => x.Bool(y => y.Must(querys.ToArray())))
                  .Source(sourceConfig)
                .Sort(y => y.Field(ht => ht.CreateTime, new FieldSort() { Order = SortOrder.Desc }))
                .Highlight(highlight);

Get the analyzer's tokenization of the query keyword:

       var analyzeIndexRequest = new AnalyzeIndexRequest
                  {
                      Text = new string[] { keyword },
                      Analyzer = analyzer
                  };
                  var analyzeResponse = await elasticsearchClient.Indices.AnalyzeAsync(analyzeIndexRequest);
                  if (analyzeResponse.Tokens == null)
                      return new string[0];
                  return analyzeResponse.Tokens.Select(s => s.Token).ToArray();
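The AnalyzeAsync call above is the client-side equivalent of posting this body to the index's `_analyze` endpoint (e.g. `POST /t_article/_analyze`), with a placeholder keyword:

```json
{
  "analyzer": "ik_smart",
  "text": ["your keyword here"]
}
```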

That is the rough shape of the query; the complete method is below:

       public async Task<Core.SearchEngine.Response.SearchResponse<Article_ES>> SelectArticle(HomeArticleCondition homeArticleCondition)
              {
                  string keyword = homeArticleCondition.Keyword.Trim();
                  bool isNumber = Regex.IsMatch(keyword, RegexPattern.IsNumberPattern);
                  List<Action<QueryDescriptor<Article_ES>>> querys = new List<Action<QueryDescriptor<Article_ES>>>();
                  if (isNumber)
                  {
                      querys.Add(s => s.Bool(x => x.Should(
                          should => should.Term(f => f.Field(z => z.Title).Value(keyword))
                          , should => should.Term(f => f.Field(z => z.Tag).Value(keyword))
                          , should => should.Term(f => f.Field(z => z.ArticleContent).Value(keyword))
                          )));
                  }
                  else
                  {
                      Field[] fields =
                      {
                          new Field("title"),
                          new Field("tag"),
                          new Field("articleContent"),
                          new Field("description")
                      };
                      querys.Add(s => s.MultiMatch(y => y.Fields(Fields.FromFields(fields)).Analyzer(ik_smart).Query(keyword).Type(TextQueryType.MostFields)));
                  }
                  if (homeArticleCondition.ArticleCategoryId.HasValue)
                  {
                      querys.Add(s => s.Term(t => t.Field(f => f.ArticleCategoryId).Value(FieldValue.Long(homeArticleCondition.ArticleCategoryId.Value))));
                  }
                  string index = esArticleClient.GetIndexName(typeof(Article_ES));
                  Dictionary<Field, HighlightField> highlightFields = new Dictionary<Field, HighlightField>();
                  highlightFields.Add(new Field("title"), new HighlightField()
                  {
                      PreTags = new List<string> { "<em>" },
                      PostTags = new List<string> { "</em>" },
                  });
                  highlightFields.Add(new Field("description"), new HighlightField()
                  {
                      PreTags = new List<string> { "<em>" },
                      PostTags = new List<string> { "</em>" },
                  });
                  Highlight highlight = new Highlight()
                  {
                      Fields = highlightFields
                  };
                  SourceFilter sourceFilter = new SourceFilter();
                  sourceFilter.Includes = Fields.FromFields(new Field[] { "title", "id", "author", "description", "createTime", "browserCount", "commentCount" });
                  SourceConfig sourceConfig = new SourceConfig(sourceFilter);
                  Action<SearchRequestDescriptor<Article_ES>> configureRequest = s => s.Index(index)
                  .From((homeArticleCondition.CurrentPage - 1) * homeArticleCondition.PageSize)
                  .Size(homeArticleCondition.PageSize)
                  .Query(x => x.Bool(y => y.Must(querys.ToArray())))
                  .Source(sourceConfig)
                   .Sort(y => y.Field(ht => ht.CreateTime, new FieldSort() { Order=SortOrder.Desc})).Highlight(highlight);
                  var resp = await esArticleClient.GetClient().SearchAsync<Article_ES>(configureRequest);
                  foreach (var item in resp.Hits)
                  {
                      if (item.Highlight == null)
                          continue;
                      foreach (var dict in item.Highlight)
                      {
                          switch (dict.Key)
                          {
                              case "title":
                                  item.Source.Title = string.Join("...", dict.Value);
                                  break;
                              case "description":
                                  item.Source.Description = string.Join("...", dict.Value);
                                  break;
      
                          }
                      }
                  }
                  string[] analyzeWords = await esArticleClient.AnalyzeAsync(homeArticleCondition.Keyword);
                  List<Article_ES> articles = resp.Documents.ToList();
                  return new Core.SearchEngine.Response.SearchResponse<Article_ES>(articles, analyzeWords);
              }

4. Demo

After finishing everything, publish, deploy, and see the result. Making word segmentation behave like Baidu's search looks very difficult for now.

   Here I'd also like to ask for advice: how do you use SearchRequest to package multiple query conditions, like this:

SearchRequest searchRequest = new SearchRequest();
searchRequest.From = 0;
searchRequest.Size = 10;
searchRequest.Query = /* multiple query conditions */;

I find this style more readable than the lambda chains, and easier to compose dynamically.
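As a partial answer to my own question: the 8.x client does expose an object model alongside the lambda descriptors, where query objects convert implicitly to `Query`. A sketch of what it might look like; the exact constructor and property shapes have shifted between 8.x releases, so treat this as an unverified assumption rather than confirmed against 8.0.0:

```csharp
// Hypothetical object-initializer style (names mirror this post's index).
var searchRequest = new SearchRequest("t_article")
{
    From = 0,
    Size = 10,
    Query = new BoolQuery
    {
        Must = new Query[]
        {
            new TermQuery { Field = "articleCategoryId", Value = 1 },
            new MatchQuery { Field = "title", Query = "elasticsearch" }
        }
    }
};
var resp = await client.SearchAsync<Article_ES>(searchRequest);
```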

       
posted @ 2024-09-22 12:11  灬丶