
Merge pull request #62 from tryAGI/bot/update-openapi_202409261520
github-actions[bot] committed Sep 26, 2024
2 parents fcdba18 + c2b5259 commit a921288
Showing 15 changed files with 2,188 additions and 2,170 deletions.
25 changes: 14 additions & 11 deletions src/libs/Cohere/Generated/Cohere.CohereApi.Chatv2.g.cs
@@ -25,7 +25,8 @@ partial void ProcessChatv2ResponseContent(

/// <summary>
/// Chat with the model<br/>
/// Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
/// Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).<br/>
/// Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
/// </summary>
/// <param name="xClientName"></param>
/// <param name="request"></param>
@@ -112,15 +113,16 @@ partial void ProcessChatv2ResponseContent(

/// <summary>
/// Chat with the model<br/>
/// Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
/// Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).<br/>
/// Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
/// </summary>
/// <param name="xClientName"></param>
/// <param name="model">
/// The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
/// The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model.
/// </param>
/// <param name="messages">
/// A list of chat messages in chronological order, representing a conversation between the user and the model.<br/>
/// Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/docs/chat-api).
/// Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/v2/docs/chat-api).
/// </param>
/// <param name="tools">
/// A list of available tools (functions) that the model may suggest invoking before producing a text response.<br/>
@@ -133,21 +135,22 @@ partial void ProcessChatv2ResponseContent(
/// Options for controlling citation generation.
/// </param>
/// <param name="responseFormat">
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/docs/command-r), [Command R+](https://docs.cohere.com/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting `{ "type": "json_object" }`.<br/>
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/v2/docs/command-r), [Command R+](https://docs.cohere.com/v2/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects by setting `{ "type": "json_object" }`.<br/>
/// A [JSON Schema](https://json-schema.org/) can optionally be provided, to ensure a specific structure.<br/>
/// **Note**: When using `{ "type": "json_object" }` your `message` should always explicitly instruct the model to generate a JSON (eg: _"Generate a JSON ..."_) . Otherwise the model may end up getting stuck generating an infinite stream of characters and eventually run out of context length.<br/>
/// **Limitation**: The parameter is not supported in RAG mode (when any of `connectors`, `documents`, `tools`, `tool_results` are provided).
/// **Note**: When `json_schema` is not specified, the generated object can have up to 5 layers of nesting.<br/>
/// **Limitation**: The parameter is not supported when used in combinations with the `documents` or `tools` parameters.
/// </param>
/// <param name="safetyMode">
/// Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.<br/>
/// Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.<br/>
/// When `OFF` is specified, the safety instruction will be omitted.<br/>
/// Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.<br/>
/// **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.<br/>
/// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
/// **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
/// </param>
/// <param name="maxTokens">
/// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
/// The maximum number of tokens the model will generate as part of the response.<br/>
/// **Note**: Setting a low value may result in incomplete generations.
/// </param>
/// <param name="stopSequences">
/// A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
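
For illustration only (not part of this commit), here is a minimal sketch of calling the regenerated Chat v2 endpoint with the structured-output option documented above. The Chatv2Async method name, the CohereApi constructor shape, and the ChatMessageV2 / UserMessageV2 / ResponseFormatV2 types are assumptions rather than names taken from this diff; the model, messages, responseFormat and maxTokens parameters follow the XML documentation shown in the hunk.

// Hypothetical usage sketch; type and member names flagged below are assumptions, not from this diff.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Cohere;

public static class ChatV2JsonObjectExample
{
    public static async Task RunAsync()
    {
        // Constructor shape is an assumption; the generated client may instead take an HttpClient.
        var api = new CohereApi(Environment.GetEnvironmentVariable("COHERE_API_KEY") ?? string.Empty);

        var response = await api.Chatv2Async(
            model: "command-r-plus",
            messages: new List<ChatMessageV2>
            {
                // Per the note in the documentation above, the prompt should explicitly ask for JSON
                // whenever { "type": "json_object" } is used.
                new UserMessageV2 { Content = "Generate a JSON object describing a book, with title and author fields." },
            },
            responseFormat: new ResponseFormatV2
            {
                // Forces JSON output; a json_schema may optionally be supplied for a fixed structure.
                // Without json_schema, nesting is limited to 5 levels, as the updated doc text notes.
                Type = "json_object",
            },
            maxTokens: 300);

        Console.WriteLine(response.FinishReason); // e.g. Complete or MaxTokens
    }
}
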
25 changes: 14 additions & 11 deletions src/libs/Cohere/Generated/Cohere.ICohereApi.Chatv2.g.cs
@@ -6,7 +6,8 @@ public partial interface ICohereApi
{
/// <summary>
/// Chat with the model<br/>
/// Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
/// Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).<br/>
/// Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
/// </summary>
/// <param name="xClientName"></param>
/// <param name="request"></param>
@@ -19,15 +20,16 @@ public partial interface ICohereApi

/// <summary>
/// Chat with the model<br/>
/// Generates a message from the model in response to a provided conversation. To learn how to use the Chat API with Streaming and RAG follow our Text Generation guides.
/// Generates a message from the model in response to a provided conversation. To learn more about the features of the Chat API follow our [Text Generation guides](https://docs.cohere.com/v2/docs/chat-api).<br/>
/// Follow the [Migration Guide](https://docs.cohere.com/v2/docs/migrating-v1-to-v2) for instructions on moving from API v1 to API v2.
/// </summary>
/// <param name="xClientName"></param>
/// <param name="model">
/// The name of a compatible [Cohere model](https://docs.cohere.com/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/docs/chat-fine-tuning) model.
/// The name of a compatible [Cohere model](https://docs.cohere.com/v2/docs/models) (such as command-r or command-r-plus) or the ID of a [fine-tuned](https://docs.cohere.com/v2/docs/chat-fine-tuning) model.
/// </param>
/// <param name="messages">
/// A list of chat messages in chronological order, representing a conversation between the user and the model.<br/>
/// Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/docs/chat-api).
/// Messages can be from `User`, `Assistant`, `Tool` and `System` roles. Learn more about messages and roles in [the Chat API guide](https://docs.cohere.com/v2/docs/chat-api).
/// </param>
/// <param name="tools">
/// A list of available tools (functions) that the model may suggest invoking before producing a text response.<br/>
@@ -40,21 +42,22 @@ public partial interface ICohereApi
/// Options for controlling citation generation.
/// </param>
/// <param name="responseFormat">
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/docs/command-r), [Command R+](https://docs.cohere.com/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects (with up to 5 levels of nesting) by setting `{ "type": "json_object" }`.<br/>
/// Configuration for forcing the model output to adhere to the specified format. Supported on [Command R](https://docs.cohere.com/v2/docs/command-r), [Command R+](https://docs.cohere.com/v2/docs/command-r-plus) and newer models.<br/>
/// The model can be forced into outputting JSON objects by setting `{ "type": "json_object" }`.<br/>
/// A [JSON Schema](https://json-schema.org/) can optionally be provided, to ensure a specific structure.<br/>
/// **Note**: When using `{ "type": "json_object" }` your `message` should always explicitly instruct the model to generate a JSON (eg: _"Generate a JSON ..."_) . Otherwise the model may end up getting stuck generating an infinite stream of characters and eventually run out of context length.<br/>
/// **Limitation**: The parameter is not supported in RAG mode (when any of `connectors`, `documents`, `tools`, `tool_results` are provided).
/// **Note**: When `json_schema` is not specified, the generated object can have up to 5 layers of nesting.<br/>
/// **Limitation**: The parameter is not supported when used in combinations with the `documents` or `tools` parameters.
/// </param>
/// <param name="safetyMode">
/// Used to select the [safety instruction](/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.<br/>
/// Used to select the [safety instruction](https://docs.cohere.com/v2/docs/safety-modes) inserted into the prompt. Defaults to `CONTEXTUAL`.<br/>
/// When `OFF` is specified, the safety instruction will be omitted.<br/>
/// Safety modes are not yet configurable in combination with `tools`, `tool_results` and `documents` parameters.<br/>
/// **Note**: This parameter is only compatible with models [Command R 08-2024](/docs/command-r#august-2024-release), [Command R+ 08-2024](/docs/command-r-plus#august-2024-release) and newer.<br/>
/// Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments
/// **Note**: This parameter is only compatible with models [Command R 08-2024](https://docs.cohere.com/v2/docs/command-r#august-2024-release), [Command R+ 08-2024](https://docs.cohere.com/v2/docs/command-r-plus#august-2024-release) and newer.
/// </param>
/// <param name="maxTokens">
/// The maximum number of tokens the model will generate as part of the response. Note: Setting a low value may result in incomplete generations.
/// The maximum number of tokens the model will generate as part of the response.<br/>
/// **Note**: Setting a low value may result in incomplete generations.
/// </param>
/// <param name="stopSequences">
/// A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point not including the stop sequence.
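
Because the same documentation now lives on the ICohereApi interface, callers can depend on the interface rather than the concrete client, which keeps the Chat v2 call mockable. A rough sketch follows; the Chatv2Async method name, the ChatMessageV2 / UserMessageV2 types, and the response's Message property are assumptions, while the stopSequences and maxTokens parameters come from the documentation above.

// Dependency-injection sketch against ICohereApi; member and type names below are assumptions.
using System.Collections.Generic;
using System.Threading.Tasks;
using Cohere;

public sealed class OneLineSummarizer
{
    private readonly ICohereApi _api;

    public OneLineSummarizer(ICohereApi api) => _api = api;

    public async Task<string?> SummarizeAsync(string text)
    {
        var response = await _api.Chatv2Async(
            model: "command-r",
            messages: new List<ChatMessageV2>
            {
                new UserMessageV2 { Content = $"Summarize in one sentence: {text}" },
            },
            // Up to 5 stop sequences; generation halts before the matched sequence is returned.
            stopSequences: new[] { "\n\n" },
            maxTokens: 128);

        // The Message property on the response is also an assumption for illustration.
        return response.Message?.Content?.ToString();
    }
}
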
@@ -24,7 +24,7 @@ public sealed partial class AssistantMessage
public global::System.Collections.Generic.IList<global::Cohere.ToolCallV2>? ToolCalls { get; set; }

/// <summary>
///
/// A chain-of-thought style reflection and plan that the model generates when working with Tools.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("tool_plan")]
public string? ToolPlan { get; set; }
@@ -24,7 +24,7 @@ public sealed partial class AssistantMessageResponse
public global::System.Collections.Generic.IList<global::Cohere.ToolCallV2>? ToolCalls { get; set; }

/// <summary>
///
/// A chain-of-thought style reflection and plan that the model generates when working with Tools.
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("tool_plan")]
public string? ToolPlan { get; set; }
29 changes: 11 additions & 18 deletions src/libs/Cohere/Generated/Cohere.Models.ChatFinishReason.g.cs
@@ -4,38 +4,35 @@
namespace Cohere
{
/// <summary>
/// The reason a chat request has finished.
/// The reason a chat request has finished.<br/>
/// - **complete**: The model finished sending a complete message.<br/>
/// - **max_tokens**: The number of generated tokens exceeded the model's context length or the value specified via the `max_tokens` parameter.<br/>
/// - **stop_sequence**: One of the provided `stop_sequence` entries was reached in the model's generation.<br/>
/// - **tool_call**: The model generated a Tool Call and is expecting a Tool Message in return<br/>
/// - **error**: The generation failed due to an internal error
/// </summary>
public enum ChatFinishReason
{
/// <summary>
///
/// The model finished sending a complete message.
/// </summary>
Complete,
/// <summary>
///
/// One of the provided `stop_sequence` entries was reached in the model's generation.
/// </summary>
StopSequence,
/// <summary>
///
/// The number of generated tokens exceeded the model's context length or the value specified via the `max_tokens` parameter.
/// </summary>
MaxTokens,
/// <summary>
///
/// The model generated a Tool Call and is expecting a Tool Message in return
/// </summary>
ToolCall,
/// <summary>
///
/// The generation failed due to an internal error
/// </summary>
Error,
/// <summary>
///
/// </summary>
ContentBlocked,
/// <summary>
///
/// </summary>
ErrorLimit,
}

/// <summary>
@@ -55,8 +52,6 @@ public static string ToValueString(this ChatFinishReason value)
ChatFinishReason.MaxTokens => "max_tokens",
ChatFinishReason.ToolCall => "tool_call",
ChatFinishReason.Error => "error",
ChatFinishReason.ContentBlocked => "content_blocked",
ChatFinishReason.ErrorLimit => "error_limit",
_ => throw new global::System.ArgumentOutOfRangeException(nameof(value), value, null),
};
}
@@ -72,8 +67,6 @@ public static string ToValueString(this ChatFinishReason value)
"max_tokens" => ChatFinishReason.MaxTokens,
"tool_call" => ChatFinishReason.ToolCall,
"error" => ChatFinishReason.Error,
"content_blocked" => ChatFinishReason.ContentBlocked,
"error_limit" => ChatFinishReason.ErrorLimit,
_ => null,
};
}
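
As a short illustration of the trimmed ChatFinishReason enum and its ToValueString extension shown above (not part of this commit), the sketch below only uses members that appear in the hunks and assumes the generated extension class is public, as the diff suggests.

// Sketch of handling finish reasons after a chat call; only members shown in the hunks above are used.
using System;
using Cohere;

public static class FinishReasonHandling
{
    public static void Report(ChatFinishReason reason)
    {
        // Wire value: "complete", "stop_sequence", "max_tokens", "tool_call" or "error".
        Console.WriteLine($"finish_reason = {reason.ToValueString()}");

        switch (reason)
        {
            case ChatFinishReason.ToolCall:
                Console.WriteLine("The model produced a tool call and expects a Tool message next.");
                break;
            case ChatFinishReason.MaxTokens:
                Console.WriteLine("Generation was truncated; consider raising max_tokens.");
                break;
            case ChatFinishReason.Error:
                Console.WriteLine("Generation failed due to an internal error.");
                break;
            default:
                Console.WriteLine("Generation completed normally or hit a stop sequence.");
                break;
        }
    }
}
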
@@ -9,7 +9,12 @@ namespace Cohere
public sealed partial class ChatMessageEndEventVariant2Delta
{
/// <summary>
/// The reason a chat request has finished.
/// The reason a chat request has finished.<br/>
/// - **complete**: The model finished sending a complete message.<br/>
/// - **max_tokens**: The number of generated tokens exceeded the model's context length or the value specified via the `max_tokens` parameter.<br/>
/// - **stop_sequence**: One of the provided `stop_sequence` entries was reached in the model's generation.<br/>
/// - **tool_call**: The model generated a Tool Call and is expecting a Tool Message in return<br/>
/// - **error**: The generation failed due to an internal error
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("finish_reason")]
[global::System.Text.Json.Serialization.JsonConverter(typeof(global::Cohere.JsonConverters.ChatFinishReasonJsonConverter))]
7 changes: 6 additions & 1 deletion src/libs/Cohere/Generated/Cohere.Models.ChatResponse.g.cs
@@ -16,7 +16,12 @@ public sealed partial class ChatResponse
public required string Id { get; set; }

/// <summary>
/// The reason a chat request has finished.
/// The reason a chat request has finished.<br/>
/// - **complete**: The model finished sending a complete message.<br/>
/// - **max_tokens**: The number of generated tokens exceeded the model's context length or the value specified via the `max_tokens` parameter.<br/>
/// - **stop_sequence**: One of the provided `stop_sequence` entries was reached in the model's generation.<br/>
/// - **tool_call**: The model generated a Tool Call and is expecting a Tool Message in return<br/>
/// - **error**: The generation failed due to an internal error
/// </summary>
[global::System.Text.Json.Serialization.JsonPropertyName("finish_reason")]
[global::System.Text.Json.Serialization.JsonConverter(typeof(global::Cohere.JsonConverters.ChatFinishReasonJsonConverter))]
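
To tie the pieces together, here is a sketch (illustrative only) of inspecting a ChatResponse after a call. Id and FinishReason appear in this diff; the Message property of type AssistantMessageResponse, used to reach the ToolPlan and ToolCalls members documented above, is an assumption.

// Illustrative only; the Message property name/type is assumed, the remaining members appear in this diff.
using System;
using Cohere;

public static class ChatResponseInspection
{
    public static void Inspect(ChatResponse response)
    {
        Console.WriteLine($"id = {response.Id}");
        Console.WriteLine($"finish_reason = {response.FinishReason}");

        // ToolPlan is the chain-of-thought style reflection and plan documented above.
        if (response.Message?.ToolPlan is { Length: > 0 } plan)
        {
            Console.WriteLine($"tool_plan = {plan}");
        }

        foreach (var call in response.Message?.ToolCalls ?? Array.Empty<ToolCallV2>())
        {
            Console.WriteLine($"tool call: {call}");
        }
    }
}
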
