

      fastapi-langgraph

      https://github.com/fanqingsong/fastapi-langgraph

      FastAPI LangGraph Agent Template

      A production-ready FastAPI template for building AI agent applications with LangGraph integration. This template provides a robust foundation for building scalable, secure, and maintainable AI agent services.

      Features

      • Production-Ready Architecture

        • FastAPI for high-performance async API endpoints
        • LangGraph integration for AI agent workflows
        • Langfuse for LLM observability and monitoring
        • Structured logging with environment-specific formatting
        • Rate limiting with configurable rules
        • PostgreSQL for data persistence
        • Docker and Docker Compose support
        • Prometheus metrics and Grafana dashboards for monitoring
      • Security

        • JWT-based authentication
        • Session management
        • Input sanitization
        • CORS configuration
        • Rate limiting protection
      • Developer Experience

        • Environment-specific configuration
        • Comprehensive logging system
        • Clear project structure
        • Type hints throughout
        • Easy local development setup
      • Model Evaluation Framework

        • Automated metric-based evaluation of model outputs
        • Integration with Langfuse for trace analysis
        • Detailed JSON reports with success/failure metrics
        • Interactive command-line interface
        • Customizable evaluation metrics
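The rate limiting mentioned above is driven by per-endpoint configuration (the code later in this post reads `settings.RATE_LIMIT_ENDPOINTS[...][0]`). The actual settings module is not shown in this excerpt, so the following is only an illustrative sketch of what such a table might look like, assuming slowapi-style "count/period" limit strings; the specific limit values are assumptions:

```python
# Illustrative sketch only: the template's real settings live in
# app/core/config.py, which is not shown here. Each endpoint maps to a
# list of limit strings in the "count/period" convention used by
# slowapi/limits (e.g. "30/minute"); the values below are assumed.
RATE_LIMIT_ENDPOINTS = {
    "chat": ["30/minute"],          # POST /chat
    "chat_stream": ["20/minute"],   # POST /chat/stream
    "messages": ["50/minute"],      # GET and DELETE /messages
}

# The endpoints apply the first (primary) rule for their route:
chat_limit = RATE_LIMIT_ENDPOINTS["chat"][0]
```

Keeping the limits in one dictionary lets the decorators stay declarative (`@limiter.limit(settings.RATE_LIMIT_ENDPOINTS["chat"][0])`) while the actual numbers vary per environment.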


      """Chatbot API endpoints for handling chat interactions.
      
      This module provides endpoints for chat interactions, including regular chat,
      streaming chat, message history management, and chat history clearing.
      """
      
      import json
      
      from fastapi import (
          APIRouter,
          Depends,
          HTTPException,
          Request,
      )
      from fastapi.responses import StreamingResponse
      
      from app.api.v1.auth import get_current_session
      from app.core.config import settings
      from app.core.langgraph.graph import LangGraphAgent
      from app.core.limiter import limiter
      from app.core.logging import logger
      from app.models.session import Session
      from app.schemas.chat import (
          ChatRequest,
          ChatResponse,
          StreamResponse,
      )
      
      router = APIRouter()
      agent = LangGraphAgent()
      
      
      @router.post("/chat", response_model=ChatResponse)
      @limiter.limit(settings.RATE_LIMIT_ENDPOINTS["chat"][0])
      async def chat(
          request: Request,
          chat_request: ChatRequest,
          session: Session = Depends(get_current_session),
      ):
          """Process a chat request using LangGraph.
      
          Args:
              request: The FastAPI request object for rate limiting.
              chat_request: The chat request containing messages.
              session: The current session from the auth token.
      
          Returns:
              ChatResponse: The processed chat response.
      
          Raises:
              HTTPException: If there's an error processing the request.
          """
          try:
              logger.info(
                  "chat_request_received",
                  session_id=session.id,
                  message_count=len(chat_request.messages),
              )
      
              # Process the request through the LangGraph
              result = await agent.get_response(chat_request.messages, session.id, user_id=session.user_id)
      
              logger.info("chat_request_processed", session_id=session.id)
      
              return ChatResponse(messages=result)
          except Exception as e:
              logger.error("chat_request_failed", session_id=session.id, error=str(e), exc_info=True)
              raise HTTPException(status_code=500, detail=str(e))
      
      
      @router.post("/chat/stream")
      @limiter.limit(settings.RATE_LIMIT_ENDPOINTS["chat_stream"][0])
      async def chat_stream(
          request: Request,
          chat_request: ChatRequest,
          session: Session = Depends(get_current_session),
      ):
          """Process a chat request using LangGraph with streaming response.
      
          Args:
              request: The FastAPI request object for rate limiting.
              chat_request: The chat request containing messages.
              session: The current session from the auth token.
      
          Returns:
              StreamingResponse: A streaming response of the chat completion.
      
          Raises:
              HTTPException: If there's an error processing the request.
          """
          try:
              logger.info(
                  "stream_chat_request_received",
                  session_id=session.id,
                  message_count=len(chat_request.messages),
              )
      
              async def event_generator():
                  """Generate streaming events.
      
                  Yields:
                      str: Server-sent events in JSON format.
      
                  Raises:
                      Exception: If there's an error during streaming.
                  """
                  try:
                      full_response = ""
                      async for chunk in agent.get_stream_response(
                          chat_request.messages, session.id, user_id=session.user_id
                      ):
                          full_response += chunk
                          response = StreamResponse(content=chunk, done=False)
                          yield f"data: {json.dumps(response.model_dump())}\n\n"
      
                      # Send final message indicating completion
                      final_response = StreamResponse(content="", done=True)
                      yield f"data: {json.dumps(final_response.model_dump())}\n\n"
      
                  except Exception as e:
                      logger.error(
                          "stream_chat_request_failed",
                          session_id=session.id,
                          error=str(e),
                          exc_info=True,
                      )
                      error_response = StreamResponse(content=str(e), done=True)
                      yield f"data: {json.dumps(error_response.model_dump())}\n\n"
      
              return StreamingResponse(event_generator(), media_type="text/event-stream")
      
          except Exception as e:
              logger.error(
                  "stream_chat_request_failed",
                  session_id=session.id,
                  error=str(e),
                  exc_info=True,
              )
              raise HTTPException(status_code=500, detail=str(e))
      
      
      @router.get("/messages", response_model=ChatResponse)
      @limiter.limit(settings.RATE_LIMIT_ENDPOINTS["messages"][0])
      async def get_session_messages(
          request: Request,
          session: Session = Depends(get_current_session),
      ):
          """Get all messages for a session.
      
          Args:
              request: The FastAPI request object for rate limiting.
              session: The current session from the auth token.
      
          Returns:
              ChatResponse: All messages in the session.
      
          Raises:
              HTTPException: If there's an error retrieving the messages.
          """
          try:
              messages = await agent.get_chat_history(session.id)
              return ChatResponse(messages=messages)
          except Exception as e:
              logger.error("get_messages_failed", session_id=session.id, error=str(e), exc_info=True)
              raise HTTPException(status_code=500, detail=str(e))
      
      
      @router.delete("/messages")
      @limiter.limit(settings.RATE_LIMIT_ENDPOINTS["messages"][0])
      async def clear_chat_history(
          request: Request,
          session: Session = Depends(get_current_session),
      ):
          """Clear all messages for a session.
      
          Args:
              request: The FastAPI request object for rate limiting.
              session: The current session from the auth token.
      
          Returns:
              dict: A message indicating the chat history was cleared.
          """
          try:
              await agent.clear_chat_history(session.id)
              return {"message": "Chat history cleared successfully"}
          except Exception as e:
              logger.error("clear_chat_history_failed", session_id=session.id, error=str(e), exc_info=True)
              raise HTTPException(status_code=500, detail=str(e))
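The `/chat/stream` endpoint above emits `text/event-stream` frames of the form `data: <json>\n\n`, where each JSON payload has the `StreamResponse` shape `{"content": ..., "done": ...}` and a `done: true` frame marks the end of the stream. A minimal stdlib-only sketch of the client-side parsing (the HTTP call itself is omitted; only the frame format is taken from the endpoint code):

```python
import json


def parse_sse_stream(lines):
    """Accumulate the response text from SSE lines like 'data: {...}',
    stopping at the frame whose payload has done=True."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip the blank separator lines between events
        payload = json.loads(line[len("data: "):])
        if payload.get("done"):
            break  # final frame: an error message or an empty completion marker
        parts.append(payload.get("content", ""))
    return "".join(parts)


# Example frames, as the endpoint's event_generator would emit them:
frames = [
    'data: {"content": "Hel", "done": false}',
    "",
    'data: {"content": "lo", "done": false}',
    "",
    'data: {"content": "", "done": true}',
]
print(parse_sse_stream(frames))  # -> Hello
```

Note that, per the error path in `event_generator`, a failure mid-stream also arrives as a `done: true` frame whose `content` carries the error string, so a production client would want to inspect the final frame rather than silently discard it as this sketch does.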

       

      posted @ 2025-09-21 21:32  lightsong  Views(29)  Comments(0)