Filter and sort our collection of AI developer tools to find exactly what you need.
Weaviate is an open-source vector database for AI applications. It stores data objects alongside their vector embeddings and supports vector, keyword, and hybrid search through REST and GraphQL APIs, with client libraries for languages including Python, TypeScript, Go, and Java. Engineers typically use it as the retrieval layer in RAG pipelines, either self-hosted or through the managed Weaviate Cloud offering. Confirm current features, hosting options, and licensing in the official documentation.
vLLM is an open-source inference and serving engine for large language models. It achieves high throughput with techniques such as PagedAttention and continuous batching, supports a wide range of Hugging Face model architectures, and ships an OpenAI-compatible API server for drop-in use with existing clients. It is typically run by engineers and ML platform teams as serving infrastructure on GPU hosts rather than as an end-user product. Confirm supported models, hardware requirements, and licensing in the official documentation.
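Because vLLM's server speaks the OpenAI-compatible chat completions protocol, existing HTTP clients can talk to it directly. The sketch below only constructs such a request body with the standard library; the port and model name are placeholders to be replaced with your deployment's values, and nothing is actually sent.

```python
import json

# vLLM's server (started with e.g. `vllm serve <model>`) exposes an
# OpenAI-compatible endpoint. This builds the request body only; the
# URL and model name are placeholders, and no request is sent here.
url = "http://localhost:8000/v1/chat/completions"  # default vLLM port

payload = {
    "model": "your-model-name",  # must match the model being served
    "messages": [
        {"role": "user", "content": "Summarize vLLM in one sentence."}
    ],
    "max_tokens": 64,
    "temperature": 0.2,
}

body = json.dumps(payload).encode("utf-8")  # bytes ready for an HTTP POST
```

From here, the bytes could be posted with any HTTP client and the response parsed exactly as with the OpenAI API.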
Unstructured is an open-source Python library (with a hosted API option) for extracting content from documents such as PDFs, Word files, HTML pages, and emails. It partitions raw files into structured elements (titles, narrative text, tables, and so on) that can then be cleaned, chunked, and embedded, making it a common preprocessing step in RAG pipelines. Confirm supported file types, API options, and licensing in the official documentation.
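To make the idea of "partitioning into elements" concrete, here is a minimal conceptual sketch (this is NOT the Unstructured API): split raw text into typed elements, roughly the kind of output document parsers produce before chunks are embedded for retrieval.

```python
# Conceptual sketch of document partitioning (NOT the Unstructured API):
# split raw text into typed "elements", the rough shape of output that
# document-parsing tools hand to downstream chunking/embedding steps.

def partition_text(raw: str) -> list:
    """Split on blank lines; label short single-line blocks as titles."""
    elements = []
    for block in raw.split("\n\n"):
        block = block.strip()
        if not block:
            continue
        is_title = len(block) < 40 and "\n" not in block
        elements.append({
            "type": "Title" if is_title else "NarrativeText",
            "text": block,
        })
    return elements

doc = ("Quarterly Report\n\n"
       "Revenue grew in the third quarter, driven by new contracts "
       "and improved retention across all regions.")
elements = partition_text(doc)
```

Real parsers use layout, fonts, and file-format metadata rather than line-length heuristics, but the output shape — a list of typed elements — is the same idea.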
TruLens is an open-source library for evaluating and tracing LLM applications. It instruments an application's calls and scores responses with feedback functions such as groundedness, context relevance, and answer relevance, so teams can compare versions of a RAG pipeline or agent side by side. Confirm current features, hosting options, and licensing in the official documentation.
Text Generation WebUI (often called "oobabooga" after its maintainer) is an open-source web interface for running LLMs locally. It supports multiple backends, including llama.cpp and Hugging Face Transformers, and offers chat and notebook modes, an extension system, and an API for programmatic access. It is popular with practitioners who want to experiment with local models without writing serving code. Confirm supported backends, hardware requirements, and licensing in the official documentation.
Text Generation Inference (TGI) is Hugging Face's toolkit for serving LLMs in production. Its server provides continuous batching, tensor parallelism, quantization options, and token streaming, and it is used to power Hugging Face's own hosted inference products. Engineers and ML platform teams typically run it as serving infrastructure on GPU hosts. Confirm supported models, deployment options, and licensing in the official documentation.
Qdrant is an open-source vector search engine written in Rust. It stores vectors together with JSON payloads and supports filtered nearest-neighbor search through REST and gRPC APIs; it can run as a single node or a distributed cluster, and a managed Qdrant Cloud service is also available. It typically serves as the retrieval layer in RAG and semantic-search systems. Confirm current features, hosting options, and licensing in the official documentation.
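A vector search in Qdrant's REST API is a POST to the collection's search endpoint. The sketch below only constructs such a request with the standard library; the host, collection name, vector values, and payload field are placeholders, and nothing is sent. Check the exact endpoint and body schema against the Qdrant API reference for your version.

```python
import json

# Qdrant exposes REST (and gRPC) endpoints; a vector search is a POST
# to /collections/<name>/points/search. This builds the request only —
# host, collection, vector, and filter field below are placeholders.
collection = "docs"
url = "http://localhost:6333/collections/" + collection + "/points/search"

request_body = {
    "vector": [0.05, 0.61, 0.76, 0.74],  # query embedding (placeholder)
    "limit": 3,                          # top-k results to return
    "with_payload": True,                # include stored metadata
    "filter": {                          # optional payload filter
        "must": [{"key": "lang", "match": {"value": "en"}}]
    },
}

body = json.dumps(request_body).encode("utf-8")
```

The payload filter is what distinguishes vector databases like Qdrant from bare ANN libraries: the nearest-neighbor search is constrained to points whose metadata matches.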
promptfoo is an open-source CLI and library for testing and evaluating LLM prompts. Evals are defined in declarative YAML configs that pair prompts, providers, and test cases with assertions, letting teams compare models side by side and catch regressions in CI; it also includes red-teaming features for probing LLM applications. Confirm supported providers, assertion types, and licensing in the official documentation.
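A minimal config sketch illustrates the declarative style; the provider ID, variable name, and assertion below are illustrative placeholders, so verify the exact syntax against the promptfoo configuration reference before use.

```yaml
# Minimal promptfooconfig.yaml sketch (placeholder provider/assertions;
# verify syntax against the promptfoo docs for your version).
prompts:
  - "Summarize this support ticket: {{ticket}}"
providers:
  - openai:gpt-4o-mini   # example provider id
tests:
  - vars:
      ticket: "My order arrived damaged and I need a replacement."
    assert:
      - type: contains
        value: "replacement"
```

Running the eval over such a file produces a pass/fail matrix of prompts against providers, which is what makes it useful as a CI gate.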
Phoenix is Arize AI's open-source observability and evaluation tool for LLM applications. It collects traces via OpenTelemetry-based instrumentation, visualizes spans from frameworks such as LangChain and LlamaIndex, and supports running evaluations over captured traces; it can run locally in a notebook or be self-hosted. Confirm current features, hosting options, and licensing in the official documentation.
pgvector is an open-source PostgreSQL extension that adds a vector column type and nearest-neighbor search directly to Postgres. It provides Euclidean, inner-product, and cosine distance operators along with approximate indexes (HNSW and IVFFlat), which lets teams keep embeddings next to their relational data instead of operating a separate vector database. Confirm current operators, index options, and licensing in the official documentation.
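The distance operators are the core of pgvector's query model, so here is a plain-Python mirror of what `<->` (Euclidean) and `<=>` (cosine distance) compute, with an illustrative SQL query in the comments; the table and column names are placeholders.

```python
import math

# Plain-Python mirror of pgvector's distance operators:
#   embedding <-> q   Euclidean (L2) distance
#   embedding <=> q   cosine distance (1 - cosine similarity)
# A typical nearest-neighbor query (illustrative table/column names):
#   SELECT id FROM items ORDER BY embedding <-> '[0.1, 0.2, 0.3]' LIMIT 5;

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Ranking by distance is exactly what ORDER BY embedding <-> q does.
query = [1.0, 0.0]
docs = {"a": [1.0, 0.0], "b": [0.0, 1.0]}
nearest = min(docs, key=lambda k: l2_distance(docs[k], query))
```

In Postgres the same ranking runs inside the database, optionally accelerated by an HNSW or IVFFlat index instead of a full scan.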
NeMo Guardrails is NVIDIA's open-source toolkit for adding programmable guardrails to LLM-based conversational systems. Rails are defined in a modeling language called Colang and can constrain topics, enforce dialog flows, and screen inputs and outputs before they reach users. Confirm current features, supported models, and licensing in the official documentation.
Milvus is an open-source vector database designed for large-scale similarity search. Originally created at Zilliz and now an LF AI & Data project, it supports multiple index types and tunable consistency, and scales from a lightweight standalone deployment up to a distributed cluster; a managed offering is available as Zilliz Cloud. Confirm current deployment modes, index options, and licensing in the official documentation.
Langfuse is an open-source LLM engineering platform focused on tracing, prompt management, and evaluation. Its SDKs capture detailed traces of LLM calls — inputs, outputs, latency, and cost — which can be inspected, annotated, and scored in its UI; it can be self-hosted or used as a managed cloud service. Confirm current features, hosting options, and licensing in the official documentation.
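To illustrate what a trace is built from, here is a conceptual span recorder (this is NOT the Langfuse SDK): observability tools wrap each call, record its name, timing, and output, and ship the resulting records to a backend for inspection.

```python
import time

# Conceptual span recorder (NOT the Langfuse SDK): wrap a call, time it,
# and capture its output — the basic record LLM tracing tools collect.

def record_span(name, fn, *args):
    start = time.perf_counter()
    output = fn(*args)
    return {
        "name": name,
        "latency_s": time.perf_counter() - start,
        "output": output,
    }

# `lambda text: text[:20]` stands in for a real LLM call.
span = record_span("summarize", lambda text: text[:20],
                   "A very long document " * 3)
```

Real SDKs additionally nest spans into traces, attach token counts and cost, and export asynchronously rather than returning a dict.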
Guardrails AI is an open-source Python framework for validating and structuring LLM outputs. Developers attach validators — format, content, or safety checks — to model calls, and the framework can reject, fix, or re-ask the model when an output fails validation; a hub of prebuilt validators is available. Confirm current validators, integrations, and licensing in the official documentation.
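The validate-and-re-ask pattern such frameworks implement can be sketched in a few lines (this is NOT the Guardrails AI API; `fake_model` stands in for a real LLM call):

```python
import re

# Conceptual validate-and-retry loop (NOT the Guardrails AI API):
# check a model's output against a rule; on failure, re-ask with a
# stricter instruction. `fake_model` is a stand-in for a real LLM.

def fake_model(prompt):
    # Pretend the model only complies when instructed explicitly.
    if "digits only" in prompt:
        return "order-12345"
    return "sure, it's 12345!"

def validated_call(prompt, pattern, max_retries=2):
    for _ in range(max_retries + 1):
        output = fake_model(prompt)
        if re.fullmatch(pattern, output):
            return output  # output passed validation
        # Re-ask with a corrective instruction appended.
        prompt += " Respond with the order id, digits only, prefixed 'order-'."
    return None  # give up after exhausting retries

result = validated_call("What is my order id?", r"order-\d+")
```

Real frameworks generalize this loop with typed schemas, validator libraries, and configurable on-fail actions (reject, fix, re-ask), but the control flow is the same.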
FastChat is LMSYS's open-source platform for training, serving, and evaluating chat LLMs. It includes a distributed multi-model serving system with a web UI and an OpenAI-compatible API, and it powered the Vicuna model and the Chatbot Arena leaderboard. Confirm current features, supported models, and licensing in the official documentation.
Dify is an open-source platform for developing LLM applications. It combines a visual workflow builder, RAG pipelines, agent capabilities, and model management behind a single interface, so teams can assemble and operate LLM apps with less custom code; it can be self-hosted or used as a cloud service. Confirm current features, hosting options, and licensing in the official documentation.
DeepEval is an open-source Python framework for evaluating LLM outputs, built by Confident AI. It offers a pytest-like workflow for writing unit tests against model responses, with metrics such as answer relevancy, faithfulness, and G-Eval-style LLM-as-a-judge scoring, making it a natural fit for CI pipelines. Confirm current metrics, integrations, and licensing in the official documentation.
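The underlying idea — run test cases through a scoring function and aggregate a pass rate — can be sketched without any framework (this is NOT the DeepEval API; the cases and metric are illustrative):

```python
# Conceptual sketch of test-case-based LLM evaluation (NOT the DeepEval
# API): score each case with a simple metric and aggregate a pass rate.

def contains_all(output, required):
    """A toy metric: does the output mention every required term?"""
    return all(term.lower() in output.lower() for term in required)

cases = [
    {"output": "Paris is the capital of France.", "required": ["Paris"]},
    {"output": "I am not sure.",                  "required": ["Paris"]},
]

results = [contains_all(c["output"], c["required"]) for c in cases]
pass_rate = sum(results) / len(results)
```

Frameworks like DeepEval replace the toy metric with statistical and LLM-as-a-judge scorers and surface failures through a test runner, but the evaluation loop has this shape.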
Chroma is an open-source embedding database aimed at LLM application developers. Its Python and JavaScript clients make it simple to add documents, supply or generate embeddings, and run similarity queries — in-process, persisted locally, or against a client-server deployment — which makes it a common choice for RAG prototypes. Confirm current features, deployment options, and licensing in the official documentation.
Braintrust is a commercial platform for evaluating, logging, and monitoring LLM products. It provides SDKs for writing evals against datasets, a prompt playground for side-by-side comparisons, and production logging so teams can trace regressions back to specific prompts and models. Confirm current features, pricing, and terms in the official documentation.
BentoML is an open-source Python framework for serving and packaging machine-learning models. Developers define services in Python, bundle them with their dependencies as portable "Bentos", and deploy them as APIs in containers or on the managed BentoCloud platform; it is general-purpose model-serving infrastructure that is also widely used for LLM workloads. Confirm current features, deployment options, and licensing in the official documentation.