yym68686 committed on
Commit 25f3a29 · 1 Parent(s): 4d2d046

💰 Sponsors: Thanks to @PowerHunter for the CNY 200 sponsorship; sponsorship information has been added to the README.

Files changed (3):
  1. README.md +24 -4
  2. README_CN.md +21 -1
  3. main.py +2 -2
README.md CHANGED

````diff
@@ -23,10 +23,10 @@ For personal use, one/new-api is too complex with many commercial features that
 - Support OpenAI, Anthropic, Gemini, Vertex native tool use function calls.
 - Support OpenAI, Anthropic, Gemini, Vertex native image recognition API.
 - Support four types of load balancing.
-  1. Supports channel-level weighted load balancing, allowing requests to be distributed according to different channel weights. It is not enabled by default and requires configuring channel weights.
-  2. Support Vertex regional load balancing and high concurrency, which can increase Gemini and Claude concurrency by up to (number of APIs * number of regions) times. Automatically enabled without additional configuration.
-  3. Except for Vertex region-level load balancing, all APIs support channel-level sequential load balancing, enhancing the immersive translation experience. It is not enabled by default and requires configuring `SCHEDULING_ALGORITHM` as `round_robin`.
-  4. Support automatic API key-level round-robin load balancing for multiple API Keys in a single channel.
+  1. Supports channel-level weighted load balancing, allowing requests to be distributed according to different channel weights. It is not enabled by default and requires configuring channel weights.
+  2. Support Vertex regional load balancing and high concurrency, which can increase Gemini and Claude concurrency by up to (number of APIs * number of regions) times. Automatically enabled without additional configuration.
+  3. Except for Vertex region-level load balancing, all APIs support channel-level sequential load balancing, enhancing the immersive translation experience. It is not enabled by default and requires configuring `SCHEDULING_ALGORITHM` as `round_robin`.
+  4. Support automatic API key-level round-robin load balancing for multiple API Keys in a single channel.
 - Support automatic retry, when an API channel response fails, automatically retry the next API channel.
 - Support fine-grained permission control. Support using wildcards to set specific models available for API key channels.
 - Support rate limiting, you can set the maximum number of requests per minute as an integer, such as 2/min, 2 times per minute, 5/hour, 5 times per hour, 10/day, 10 times per day, 10/month, 10 times per month, 10/year, 10 times per year. Default is 60/min.
@@ -301,6 +301,26 @@ curl -X POST http://127.0.0.1:8000/v1/chat/completions \
 -d '{"model": "gpt-4o","messages": [{"role": "user", "content": "Hello"}],"stream": true}'
 ```
 
+## Sponsors
+
+We thank the following sponsors for their support:
+<!-- ¥200 -->
+- @PowerHunter: ¥200
+
+## How to sponsor us
+
+If you would like to support our project, you can sponsor us in the following ways:
+
+1. [PayPal](https://www.paypal.me/yym68686)
+
+2. [USDT-TRC20](https://pb.yym68686.top/~USDT-TRC20), USDT-TRC20 wallet address: `TLFbqSv5pDu5he43mVmK1dNx7yBMFeN7d8`
+
+3. [WeChat](https://pb.yym68686.top/~wechat)
+
+4. [Alipay](https://pb.yym68686.top/~alipay)
+
+Thank you for your support!
+
 ## ⭐ Star History
 
 <a href="https://github.com/yym68686/uni-api/stargazers">
````
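The `round_robin` scheduling that the README's load-balancing list refers to can be pictured as a simple cyclic iterator over the configured channels. A minimal sketch (the channel names and the use of `itertools.cycle` are illustrative assumptions, not uni-api's actual implementation):

```python
from itertools import cycle

# Hypothetical channel list; uni-api reads its channels from a config file.
channels = ["openai-1", "anthropic-1", "gemini-1"]

# round_robin: each incoming request takes the next channel in order,
# wrapping back to the first channel once the list is exhausted.
scheduler = cycle(channels)

picks = [next(scheduler) for _ in range(5)]
# picks == ["openai-1", "anthropic-1", "gemini-1", "openai-1", "anthropic-1"]
```

Weighted load balancing (item 1 in the list) differs in that channels are drawn in proportion to configured weights rather than strictly in sequence.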
README_CN.md CHANGED

````diff
@@ -301,7 +301,27 @@ curl -X POST http://127.0.0.1:8000/v1/chat/completions \
 -d '{"model": "gpt-4o","messages": [{"role": "user", "content": "Hello"}],"stream": true}'
 ```
 
-## ⭐ Star History
+## 赞助商
+
+我们感谢以下赞助商的支持:
+<!-- ¥200 -->
+- @PowerHunter:¥200
+
+## 如何赞助我们
+
+如果您想支持我们的项目,您可以通过以下方式赞助我们:
+
+1. [PayPal](https://www.paypal.me/yym68686)
+
+2. [USDT-TRC20](https://pb.yym68686.top/~USDT-TRC20),USDT-TRC20 钱包地址:`TLFbqSv5pDu5he43mVmK1dNx7yBMFeN7d8`
+
+3. [微信](https://pb.yym68686.top/~wechat)
+
+4. [支付宝](https://pb.yym68686.top/~alipay)
+
+感谢您的支持!
+
+## ⭐ Star 历史
 
 <a href="https://github.com/yym68686/uni-api/stargazers">
 <img width="500" alt="Star History Chart" src="https://api.star-history.com/svg?repos=yym68686/uni-api&type=Date">
````
main.py CHANGED

```diff
@@ -317,7 +317,7 @@ class LoggingStreamingResponse(Response):
                 chunk = chunk.encode('utf-8')
             line = chunk.decode('utf-8')
             if is_debug:
-                logger.info(f"{line}")
+                logger.info(f"{line.encode('utf-8').decode('unicode_escape')}")
             if line.startswith("data:"):
                 line = line.lstrip("data: ")
                 if not line.startswith("[DONE]") and not line.startswith("OK"):
@@ -590,7 +590,7 @@ async def process_request(request: Union[RequestModel, ImageGenerationRequest, A
     wrapped_generator, first_response_time = await error_handling_wrapper(generator)
     first_element = await anext(wrapped_generator)
     first_element = first_element.lstrip("data: ")
-    print("first_element", first_element)
+    # print("first_element", first_element)
     first_element = json.loads(first_element)
     response = StarletteStreamingResponse(iter([json.dumps(first_element)]), media_type="application/json")
     # response = JSONResponse(first_element)
```
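The first main.py hunk replaces the plain debug log with one that decodes `\uXXXX` escape sequences, so non-ASCII text inside the streamed JSON becomes readable in the log. A minimal sketch of the effect (the sample SSE line is invented for illustration; note that this `encode`/`decode('unicode_escape')` round-trip is only safe when the line itself is pure ASCII, since `unicode_escape` treats the input bytes as Latin-1):

```python
# A streamed SSE line as received: JSON keeps non-ASCII text as \uXXXX escapes.
line = 'data: {"delta": "\\u4f60\\u597d"}'

# Old log statement: prints the escape sequences verbatim.
raw = f"{line}"

# New log statement: re-encode to bytes, then interpret \u4f60\u597d as
# actual characters (你好) before handing the string to the logger.
readable = line.encode('utf-8').decode('unicode_escape')

# raw      -> data: {"delta": "\u4f60\u597d"}
# readable -> data: {"delta": "你好"}
```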