I used the API key from Tongyi Qianwen, and selected the embedding model text-embedding-v2. Why is the embedding very slow in CPU mode, and is the backend downloading the text-embedding-v2 model?
I still don't understand why a local text-embedding-v2 model is needed at all. Also, downloading the model in the middle of a chat is poor design and hurts the software's stability; the download should instead happen and complete during startup or configuration.
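The eager-initialization pattern suggested above could be sketched roughly as follows. This is only an illustration of the idea, not RAGFlow's actual code; all class and method names here are hypothetical.

```python
# Sketch: resolve/download any local model once at startup or configuration
# time, so the first chat request never blocks on a model download.
# All names below are hypothetical illustrations, not a real API.

class EmbeddingService:
    def __init__(self, model_name: str):
        self.model_name = model_name
        self._model = None

    def warm_up(self) -> None:
        # Called once during startup/configuration, not lazily during chat.
        if self._model is None:
            self._model = self._load_or_download(self.model_name)

    def _load_or_download(self, name: str):
        # Placeholder for the real download/load logic.
        return f"loaded:{name}"

    def embed(self, text: str):
        # Fails loudly if warm_up() was skipped, instead of silently
        # triggering a slow download inside a user-facing request.
        assert self._model is not None, "call warm_up() at startup"
        return [float(len(text))]  # dummy embedding for the sketch

svc = EmbeddingService("text-embedding-v2")
svc.warm_up()           # happens during startup, not during chat
print(svc.embed("hi"))  # fast path: model is already loaded
```

The point of the pattern is simply to move the one-time expensive step (the download) out of the request path and make any missing warm-up an explicit error rather than a hidden stall.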