{
  "version": "2.0",
  "service": "<p>Describes the API operations for running inference using Amazon Bedrock models.</p>",
  "operations": {
    "ApplyGuardrail": "<p>The action to apply a guardrail.</p> <p>For troubleshooting some of the common errors you might encounter when using the <code>ApplyGuardrail</code> API, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html\">Troubleshooting Amazon Bedrock API Error Codes</a> in the Amazon Bedrock User Guide</p>",
    "Converse": "<p>Sends messages to the specified Amazon Bedrock model. <code>Converse</code> provides a consistent interface that works with all models that support messages. This allows you to write code once and use it with different models. If a model has unique inference parameters, you can also pass those unique parameters to the model.</p> <p>Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.</p> <p>You can submit a prompt by including it in the <code>messages</code> field, specifying the <code>modelId</code> of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.</p> <p>You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the <code>promptVariables</code> field. You can append more messages to the prompt by using the <code>messages</code> field. If you use a prompt from Prompt management, you can't include the following fields in the request: <code>additionalModelRequestFields</code>, <code>inferenceConfig</code>, <code>system</code>, or <code>toolConfig</code>. Instead, these fields must be defined through Prompt management. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management-use.html\">Use a prompt from Prompt management</a>.</p> <p>For information about the Converse API, see <i>Use the Converse API</i> in the <i>Amazon Bedrock User Guide</i>. To use a guardrail, see <i>Use a guardrail with the Converse API</i> in the <i>Amazon Bedrock User Guide</i>. To use a tool with a model, see <i>Tool use (Function calling)</i> in the <i>Amazon Bedrock User Guide</i> </p> <p>For example code, see <i>Converse API examples</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permission for the <code>bedrock:InvokeModel</code> action. 
</p> <important> <p>To deny all inference access to resources that you specify in the modelId field, you need to deny access to the <code>bedrock:InvokeModel</code> and <code>bedrock:InvokeModelWithResponseStream</code> actions. Doing this also denies access to the resource through the base inference actions (<a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html\">InvokeModel</a> and <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html\">InvokeModelWithResponseStream</a>). For more information see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference\">Deny access for inference on specific models</a>. </p> </important> <p>For troubleshooting some of the common errors you might encounter when using the <code>Converse</code> API, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html\">Troubleshooting Amazon Bedrock API Error Codes</a> in the Amazon Bedrock User Guide</p>",
    "ConverseStream": "<p>Sends messages to the specified Amazon Bedrock model and returns the response in a stream. <code>ConverseStream</code> provides a consistent API that works with all Amazon Bedrock models that support messages. This allows you to write code once and use it with different models. Should a model have unique inference parameters, you can also pass those unique parameters to the model. </p> <p>To find out if a model supports streaming, call <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetFoundationModel.html\">GetFoundationModel</a> and check the <code>responseStreamingSupported</code> field in the response.</p> <note> <p>The CLI doesn't support streaming operations in Amazon Bedrock, including <code>ConverseStream</code>.</p> </note> <p>Amazon Bedrock doesn't store any text, images, or documents that you provide as content. The data is only used to generate the response.</p> <p>You can submit a prompt by including it in the <code>messages</code> field, specifying the <code>modelId</code> of a foundation model or inference profile to run inference on it, and including any other fields that are relevant to your use case.</p> <p>You can also submit a prompt from Prompt management by specifying the ARN of the prompt version and including a map of variables to values in the <code>promptVariables</code> field. You can append more messages to the prompt by using the <code>messages</code> field. If you use a prompt from Prompt management, you can't include the following fields in the request: <code>additionalModelRequestFields</code>, <code>inferenceConfig</code>, <code>system</code>, or <code>toolConfig</code>. Instead, these fields must be defined through Prompt management. 
For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management-use.html\">Use a prompt from Prompt management</a>.</p> <p>For information about the Converse API, see <i>Use the Converse API</i> in the <i>Amazon Bedrock User Guide</i>. To use a guardrail, see <i>Use a guardrail with the Converse API</i> in the <i>Amazon Bedrock User Guide</i>. To use a tool with a model, see <i>Tool use (Function calling)</i> in the <i>Amazon Bedrock User Guide</i> </p> <p>For example code, see <i>Conversation streaming example</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permission for the <code>bedrock:InvokeModelWithResponseStream</code> action.</p> <important> <p>To deny all inference access to resources that you specify in the modelId field, you need to deny access to the <code>bedrock:InvokeModel</code> and <code>bedrock:InvokeModelWithResponseStream</code> actions. Doing this also denies access to the resource through the base inference actions (<a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html\">InvokeModel</a> and <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html\">InvokeModelWithResponseStream</a>). For more information see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference\">Deny access for inference on specific models</a>. </p> </important> <p>For troubleshooting some of the common errors you might encounter when using the <code>ConverseStream</code> API, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html\">Troubleshooting Amazon Bedrock API Error Codes</a> in the Amazon Bedrock User Guide</p>",
    "CountTokens": "<p>Returns the token count for a given inference request. This operation helps you estimate token usage before sending requests to foundation models by returning the token count that would be used if the same input were sent to the model in an inference request.</p> <p>Token counting is model-specific because different models use different tokenization strategies. The token count returned by this operation will match the token count that would be charged if the same input were sent to the model in an <code>InvokeModel</code> or <code>Converse</code> request.</p> <p>You can use this operation to:</p> <ul> <li> <p>Estimate costs before sending inference requests.</p> </li> <li> <p>Optimize prompts to fit within token limits.</p> </li> <li> <p>Plan for token usage in your applications.</p> </li> </ul> <p>This operation accepts the same input formats as <code>InvokeModel</code> and <code>Converse</code>, allowing you to count tokens for both raw text inputs and structured conversation formats.</p> <p>The following operations are related to <code>CountTokens</code>:</p> <ul> <li> <p> <a href=\"https://docs.aws.amazon.com/bedrock/latest/API/API_runtime_InvokeModel.html\">InvokeModel</a> - Sends inference requests to foundation models</p> </li> <li> <p> <a href=\"https://docs.aws.amazon.com/bedrock/latest/API/API_runtime_Converse.html\">Converse</a> - Sends conversation-based inference requests to foundation models</p> </li> </ul>",
    "GetAsyncInvoke": "<p>Retrieve information about an asynchronous invocation.</p>",
    "InvokeModel": "<p>Invokes the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. You use model inference to generate text, images, and embeddings.</p> <p>For example code, see <i>Invoke model code examples</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permission for the <code>bedrock:InvokeModel</code> action.</p> <important> <p>To deny all inference access to resources that you specify in the modelId field, you need to deny access to the <code>bedrock:InvokeModel</code> and <code>bedrock:InvokeModelWithResponseStream</code> actions. Doing this also denies access to the resource through the Converse API actions (<a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> and <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>). For more information see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference\">Deny access for inference on specific models</a>. </p> </important> <p>For troubleshooting some of the common errors you might encounter when using the <code>InvokeModel</code> API, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html\">Troubleshooting Amazon Bedrock API Error Codes</a> in the Amazon Bedrock User Guide</p>",
    "InvokeModelWithBidirectionalStream": "<p>Invoke the specified Amazon Bedrock model to run inference using the bidirectional stream. The response is returned in a stream that remains open for 8 minutes. A single session can contain multiple prompts and responses from the model. The prompts to the model are provided as audio files and the model's responses are spoken back to the user and transcribed.</p> <p>It is possible for users to interrupt the model's response with a new prompt, which will halt the response speech. The model will retain contextual awareness of the conversation while pivoting to respond to the new prompt.</p>",
    "InvokeModelWithResponseStream": "<p>Invoke the specified Amazon Bedrock model to run inference using the prompt and inference parameters provided in the request body. The response is returned in a stream.</p> <p>To see if a model supports streaming, call <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_GetFoundationModel.html\">GetFoundationModel</a> and check the <code>responseStreamingSupported</code> field in the response.</p> <note> <p>The CLI doesn't support streaming operations in Amazon Bedrock, including <code>InvokeModelWithResponseStream</code>.</p> </note> <p>For example code, see <i>Invoke model with streaming code example</i> in the <i>Amazon Bedrock User Guide</i>. </p> <p>This operation requires permissions to perform the <code>bedrock:InvokeModelWithResponseStream</code> action. </p> <important> <p>To deny all inference access to resources that you specify in the modelId field, you need to deny access to the <code>bedrock:InvokeModel</code> and <code>bedrock:InvokeModelWithResponseStream</code> actions. Doing this also denies access to the resource through the Converse API actions (<a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> and <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>). For more information see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference\">Deny access for inference on specific models</a>. </p> </important> <p>For troubleshooting some of the common errors you might encounter when using the <code>InvokeModelWithResponseStream</code> API, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html\">Troubleshooting Amazon Bedrock API Error Codes</a> in the Amazon Bedrock User Guide</p>",
    "ListAsyncInvokes": "<p>Lists asynchronous invocations.</p>",
    "StartAsyncInvoke": "<p>Starts an asynchronous invocation.</p> <p>This operation requires permission for the <code>bedrock:InvokeModel</code> action.</p> <important> <p>To deny all inference access to resources that you specify in the modelId field, you need to deny access to the <code>bedrock:InvokeModel</code> and <code>bedrock:InvokeModelWithResponseStream</code> actions. Doing this also denies access to the resource through the Converse API actions (<a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> and <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>). For more information see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_id-based-policy-examples-deny-inference\">Deny access for inference on specific models</a>. </p> </important>"
  },
  "shapes": {
    "AccessDeniedException": {
      "base": "<p>The request is denied because you do not have sufficient permissions to perform the requested action. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-access-denied\">AccessDeniedException</a> in the Amazon Bedrock User Guide</p>",
      "refs": {
      }
    },
    "AccountId": {
      "base": null,
      "refs": {
        "AsyncInvokeS3OutputDataConfig$bucketOwner": "<p>If the bucket belongs to another AWS account, specify that account's ID.</p>",
        "S3Location$bucketOwner": "<p>If the bucket belongs to another AWS account, specify that account's ID.</p>"
      }
    },
    "AnyToolChoice": {
      "base": "<p>The model must request at least one tool (no text is generated). For example, <code>{\"any\" : {}}</code>.</p>",
      "refs": {
        "ToolChoice$any": "<p>The model must request at least one tool (no text is generated).</p>"
      }
    },
    "ApplyGuardrailRequest": {
      "base": null,
      "refs": {
      }
    },
    "ApplyGuardrailResponse": {
      "base": null,
      "refs": {
      }
    },
    "AsyncInvokeArn": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$modelArn": "<p>The invoked model's ARN.</p>",
        "GetAsyncInvokeResponse$modelArn": "<p>The invocation's model ARN.</p>"
      }
    },
    "AsyncInvokeIdempotencyToken": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$clientRequestToken": "<p>The invocation's idempotency token.</p>",
        "GetAsyncInvokeResponse$clientRequestToken": "<p>The invocation's idempotency token.</p>",
        "StartAsyncInvokeRequest$clientRequestToken": "<p>Specify idempotency token to ensure that requests are not duplicated.</p>"
      }
    },
    "AsyncInvokeIdentifier": {
      "base": null,
      "refs": {
        "StartAsyncInvokeRequest$modelId": "<p>The model to invoke.</p>"
      }
    },
    "AsyncInvokeMessage": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$failureMessage": "<p>An error message.</p>",
        "GetAsyncInvokeResponse$failureMessage": "<p>An error message.</p>"
      }
    },
    "AsyncInvokeOutputDataConfig": {
      "base": "<p>Asynchronous invocation output data settings.</p>",
      "refs": {
        "AsyncInvokeSummary$outputDataConfig": "<p>The invocation's output data settings.</p>",
        "GetAsyncInvokeResponse$outputDataConfig": "<p>Output data settings.</p>",
        "StartAsyncInvokeRequest$outputDataConfig": "<p>Where to store the output.</p>"
      }
    },
    "AsyncInvokeS3OutputDataConfig": {
      "base": "<p>Asynchronous invocation output data settings.</p>",
      "refs": {
        "AsyncInvokeOutputDataConfig$s3OutputDataConfig": "<p>A storage location for the output data in an S3 bucket</p>"
      }
    },
    "AsyncInvokeStatus": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$status": "<p>The invocation's status.</p>",
        "GetAsyncInvokeResponse$status": "<p>The invocation's status.</p>",
        "ListAsyncInvokesRequest$statusEquals": "<p>Filter invocations by status.</p>"
      }
    },
    "AsyncInvokeSummaries": {
      "base": null,
      "refs": {
        "ListAsyncInvokesResponse$asyncInvokeSummaries": "<p>A list of invocation summaries.</p>"
      }
    },
    "AsyncInvokeSummary": {
      "base": "<p>A summary of an asynchronous invocation.</p>",
      "refs": {
        "AsyncInvokeSummaries$member": null
      }
    },
    "AutoToolChoice": {
      "base": "<p>The Model automatically decides if a tool should be called or whether to generate text instead. For example, <code>{\"auto\" : {}}</code>.</p>",
      "refs": {
        "ToolChoice$auto": "<p>(Default). The Model automatically decides if a tool should be called or whether to generate text instead. </p>"
      }
    },
    "AutomatedReasoningRuleIdentifier": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningRule$identifier": "<p>The unique identifier of the automated reasoning rule.</p>"
      }
    },
    "BidirectionalInputPayloadPart": {
      "base": "<p>Payload content for the bidirectional input. The input is an audio stream.</p>",
      "refs": {
        "InvokeModelWithBidirectionalStreamInput$chunk": "<p>The audio chunk that is used as input for the invocation step.</p>"
      }
    },
    "BidirectionalOutputPayloadPart": {
      "base": "<p>Output from the bidirectional stream. The output is speech and a text transcription.</p>",
      "refs": {
        "InvokeModelWithBidirectionalStreamOutput$chunk": "<p>The speech chunk that was provided as output from the invocation step.</p>"
      }
    },
    "Blob": {
      "base": null,
      "refs": {
        "ReasoningContentBlock$redactedContent": "<p>The content in the reasoning that was encrypted by the model provider for safety reasons. The encryption doesn't affect the quality of responses.</p>",
        "ReasoningContentBlockDelta$redactedContent": "<p>The content in the reasoning that was encrypted by the model provider for safety reasons. The encryption doesn't affect the quality of responses.</p>"
      }
    },
    "Body": {
      "base": null,
      "refs": {
        "InvokeModelRequest$body": "<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>",
        "InvokeModelResponse$body": "<p>Inference response from the model in the format specified in the <code>contentType</code> header. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>",
        "InvokeModelTokensRequest$body": "<p>The request body to count tokens for, formatted according to the model's expected input format. To learn about the input format for different models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Model inference parameters and responses</a>.</p>",
        "InvokeModelWithResponseStreamRequest$body": "<p>The prompt and inference parameters in the format specified in the <code>contentType</code> in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Bedrock User Guide.</p>"
      }
    },
    "Boolean": {
      "base": null,
      "refs": {
        "CitationsConfig$enabled": "<p>Specifies whether document citations should be included in the model's response. When set to true, the model can generate citations that reference the source documents used to inform the response.</p>",
        "GuardrailContentFilter$detected": "<p>Indicates whether content that breaches the guardrail configuration is detected.</p>",
        "GuardrailContextualGroundingFilter$detected": "<p>Indicates whether content that fails the contextual grounding evaluation (grounding or relevance score less than the corresponding threshold) was detected.</p>",
        "GuardrailCustomWord$detected": "<p>Indicates whether custom word content that breaches the guardrail configuration is detected.</p>",
        "GuardrailManagedWord$detected": "<p>Indicates whether managed word content that breaches the guardrail configuration is detected.</p>",
        "GuardrailPiiEntityFilter$detected": "<p>Indicates whether personally identifiable information (PII) that breaches the guardrail configuration is detected.</p>",
        "GuardrailRegexFilter$detected": "<p>Indicates whether custom regex entities that breach the guardrail configuration are detected.</p>",
        "GuardrailTopic$detected": "<p>Indicates whether topic content that breaches the guardrail configuration is detected.</p>"
      }
    },
    "CachePointBlock": {
      "base": "<p>Defines a section of content to be cached for reuse in subsequent API calls.</p>",
      "refs": {
        "ContentBlock$cachePoint": "<p>CachePoint to include in the message.</p>",
        "SystemContentBlock$cachePoint": "<p>CachePoint to include in the system prompt.</p>",
        "Tool$cachePoint": "<p>CachePoint to include in the tool configuration.</p>"
      }
    },
    "CachePointType": {
      "base": null,
      "refs": {
        "CachePointBlock$type": "<p>Specifies the type of cache point within the CachePointBlock.</p>"
      }
    },
    "Citation": {
      "base": "<p>Contains information about a citation that references a specific source document. Citations provide traceability between the model's generated response and the source documents that informed that response.</p>",
      "refs": {
        "Citations$member": null
      }
    },
    "CitationGeneratedContent": {
      "base": "<p>Contains the generated text content that corresponds to or is supported by a citation from a source document.</p>",
      "refs": {
        "CitationGeneratedContentList$member": null
      }
    },
    "CitationGeneratedContentList": {
      "base": null,
      "refs": {
        "CitationsContentBlock$content": "<p>The generated content that is supported by the associated citations.</p>"
      }
    },
    "CitationLocation": {
      "base": "<p>Specifies the precise location within a source document where cited content can be found. This can include character-level positions, page numbers, or document chunks depending on the document type and indexing method.</p>",
      "refs": {
        "Citation$location": "<p>The precise location within the source document where the cited content can be found, including character positions, page numbers, or chunk identifiers.</p>",
        "CitationsDelta$location": "<p>Specifies the precise location within a source document where cited content can be found. This can include character-level positions, page numbers, or document chunks depending on the document type and indexing method.</p>"
      }
    },
    "CitationSourceContent": {
      "base": "<p>Contains the actual text content from a source document that is being cited or referenced in the model's response.</p>",
      "refs": {
        "CitationSourceContentList$member": null
      }
    },
    "CitationSourceContentDelta": {
      "base": "<p>Contains incremental updates to the source content text during streaming responses, allowing clients to build up the cited content progressively.</p>",
      "refs": {
        "CitationSourceContentListDelta$member": null
      }
    },
    "CitationSourceContentList": {
      "base": null,
      "refs": {
        "Citation$sourceContent": "<p>The specific content from the source document that was referenced or cited in the generated response.</p>"
      }
    },
    "CitationSourceContentListDelta": {
      "base": null,
      "refs": {
        "CitationsDelta$sourceContent": "<p>The specific content from the source document that was referenced or cited in the generated response.</p>"
      }
    },
    "Citations": {
      "base": null,
      "refs": {
        "CitationsContentBlock$citations": "<p>An array of citations that reference the source documents used to generate the associated content.</p>"
      }
    },
    "CitationsConfig": {
      "base": "<p>Configuration settings for enabling and controlling document citations in Converse API responses. When enabled, the model can include citation information that links generated content back to specific source documents.</p>",
      "refs": {
        "DocumentBlock$citations": "<p>Configuration settings that control how citations should be generated for this specific document.</p>"
      }
    },
    "CitationsContentBlock": {
      "base": "<p>A content block that contains both generated text and associated citation information. This block type is returned when document citations are enabled, providing traceability between the generated content and the source documents that informed the response.</p>",
      "refs": {
        "ContentBlock$citationsContent": "<p>A content block that contains both generated text and associated citation information, providing traceability between the response and source documents.</p>"
      }
    },
    "CitationsDelta": {
      "base": "<p>Contains incremental updates to citation information during streaming responses. This allows clients to build up citation data progressively as the response is generated.</p>",
      "refs": {
        "ContentBlockDelta$citation": "<p>Incremental citation information that is streamed as part of the response generation process.</p>"
      }
    },
    "ConflictException": {
      "base": "<p>Error occurred because of a conflict while performing an operation.</p>",
      "refs": {
      }
    },
    "ContentBlock": {
      "base": "<p>A block of content for a message that you pass to, or receive from, a model with the <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> or <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a> API operations.</p>",
      "refs": {
        "ContentBlocks$member": null
      }
    },
    "ContentBlockDelta": {
      "base": "<p>A block of content in a streaming response.</p>",
      "refs": {
        "ContentBlockDeltaEvent$delta": "<p>The delta for a content block delta event.</p>"
      }
    },
    "ContentBlockDeltaEvent": {
      "base": "<p>The content block delta event.</p>",
      "refs": {
        "ConverseStreamOutput$contentBlockDelta": "<p>The messages output content block delta.</p>"
      }
    },
    "ContentBlockStart": {
      "base": "<p>Content block start information.</p>",
      "refs": {
        "ContentBlockStartEvent$start": "<p>Start information about a content block start event. </p>"
      }
    },
    "ContentBlockStartEvent": {
      "base": "<p>Content block start event.</p>",
      "refs": {
        "ConverseStreamOutput$contentBlockStart": "<p>Start information for a content block.</p>"
      }
    },
    "ContentBlockStopEvent": {
      "base": "<p>A content block stop event.</p>",
      "refs": {
        "ConverseStreamOutput$contentBlockStop": "<p>Stop information for a content block.</p>"
      }
    },
    "ContentBlocks": {
      "base": null,
      "refs": {
        "Message$content": "<p>The message content. Note the following restrictions:</p> <ul> <li> <p>You can include up to 20 images. Each image's size, height, and width must be no more than 3.75 MB, 8000 px, and 8000 px, respectively.</p> </li> <li> <p>You can include up to five documents. Each document's size must be no more than 4.5 MB.</p> </li> <li> <p>If you include a <code>ContentBlock</code> with a <code>document</code> field in the array, you must also include a <code>ContentBlock</code> with a <code>text</code> field.</p> </li> <li> <p>You can only include images and documents if the <code>role</code> is <code>user</code>.</p> </li> </ul>"
      }
    },
    "ConversationRole": {
      "base": null,
      "refs": {
        "Message$role": "<p>The role that the message plays in the message.</p>",
        "MessageStartEvent$role": "<p>The role for the message.</p>"
      }
    },
    "ConversationalModelId": {
      "base": null,
      "refs": {
        "ConverseRequest$modelId": "<p>Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference-support.html\">Supported Regions and models for cross-region inference</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>To include a prompt that was defined in <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management.html\">Prompt management</a>, specify the ARN of the prompt version to use.</p> </li> </ul> <p>The Converse API doesn't support <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html\">imported models</a>.</p>",
        "ConverseStreamRequest$modelId": "<p>Specifies the model or throughput with which to run inference, or the prompt resource to use in inference. The value depends on the resource that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference-support.html\">Supported Regions and models for cross-region inference</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>To include a prompt that was defined in <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management.html\">Prompt management</a>, specify the ARN of the prompt version to use.</p> </li> </ul> <p>The Converse API doesn't support <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html\">imported models</a>.</p>"
      }
    },
    "ConverseMetrics": {
      "base": "<p>Metrics for a call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a>.</p>",
      "refs": {
        "ConverseResponse$metrics": "<p>Metrics for the call to <code>Converse</code>.</p>"
      }
    },
    "ConverseOutput": {
      "base": "<p>The output from a call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a>.</p>",
      "refs": {
        "ConverseResponse$output": "<p>The result from the call to <code>Converse</code>.</p>"
      }
    },
    "ConverseRequest": {
      "base": null,
      "refs": {
      }
    },
    "ConverseRequestAdditionalModelResponseFieldPathsList": {
      "base": null,
      "refs": {
        "ConverseRequest$additionalModelResponseFieldPaths": "<p>Additional model parameters field paths to return in the response. <code>Converse</code> and <code>ConverseStream</code> return the requested fields as a JSON Pointer object in the <code>additionalModelResponseFields</code> field. The following is example JSON for <code>additionalModelResponseFieldPaths</code>.</p> <p> <code>[ \"/stop_sequence\" ]</code> </p> <p>For information about the JSON Pointer syntax, see the <a href=\"https://datatracker.ietf.org/doc/html/rfc6901\">Internet Engineering Task Force (IETF)</a> documentation.</p> <p> <code>Converse</code> and <code>ConverseStream</code> reject an empty or incorrectly structured JSON Pointer with a <code>400</code> error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by <code>Converse</code>.</p>"
      }
    },
    "ConverseRequestAdditionalModelResponseFieldPathsListMemberString": {
      "base": null,
      "refs": {
        "ConverseRequestAdditionalModelResponseFieldPathsList$member": null
      }
    },
    "ConverseResponse": {
      "base": null,
      "refs": {
      }
    },
    "ConverseStreamMetadataEvent": {
      "base": "<p>A conversation stream metadata event.</p>",
      "refs": {
        "ConverseStreamOutput$metadata": "<p>Metadata for the converse output stream.</p>"
      }
    },
    "ConverseStreamMetrics": {
      "base": "<p>Metrics for the stream.</p>",
      "refs": {
        "ConverseStreamMetadataEvent$metrics": "<p>The metrics for the conversation stream metadata event.</p>"
      }
    },
    "ConverseStreamOutput": {
      "base": "<p>The messages output stream.</p>",
      "refs": {
        "ConverseStreamResponse$stream": "<p>The output stream that the model generated.</p>"
      }
    },
    "ConverseStreamRequest": {
      "base": null,
      "refs": {
      }
    },
    "ConverseStreamRequestAdditionalModelResponseFieldPathsList": {
      "base": null,
      "refs": {
        "ConverseStreamRequest$additionalModelResponseFieldPaths": "<p>Additional model parameters field paths to return in the response. <code>Converse</code> and <code>ConverseStream</code> return the requested fields as a JSON Pointer object in the <code>additionalModelResponseFields</code> field. The following is example JSON for <code>additionalModelResponseFieldPaths</code>.</p> <p> <code>[ \"/stop_sequence\" ]</code> </p> <p>For information about the JSON Pointer syntax, see the <a href=\"https://datatracker.ietf.org/doc/html/rfc6901\">Internet Engineering Task Force (IETF)</a> documentation.</p> <p> <code>Converse</code> and <code>ConverseStream</code> reject an empty or incorrectly structured JSON Pointer with a <code>400</code> error code. If the JSON Pointer is valid but the requested field is not in the model response, it is ignored by <code>Converse</code>.</p>"
      }
    },
    "ConverseStreamRequestAdditionalModelResponseFieldPathsListMemberString": {
      "base": null,
      "refs": {
        "ConverseStreamRequestAdditionalModelResponseFieldPathsList$member": null
      }
    },
    "ConverseStreamResponse": {
      "base": null,
      "refs": {
      }
    },
    "ConverseStreamTrace": {
      "base": "<p>The trace object in a response from <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>. Currently, you can only trace guardrails.</p>",
      "refs": {
        "ConverseStreamMetadataEvent$trace": "<p>The trace object in the response from <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a> that contains information about the guardrail behavior.</p>"
      }
    },
    "ConverseTokensRequest": {
      "base": "<p>The inputs from a <code>Converse</code> API request for token counting.</p> <p>This structure mirrors the input format for the <code>Converse</code> operation, allowing you to count tokens for conversation-based inference requests.</p>",
      "refs": {
        "CountTokensInput$converse": "<p>A <code>Converse</code> request for which to count tokens. Use this field when you want to count tokens for a conversation-based input that would be sent to the <code>Converse</code> operation.</p>"
      }
    },
    "ConverseTrace": {
      "base": "<p>The trace object in a response from <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a>. Currently, you can only trace guardrails.</p>",
      "refs": {
        "ConverseResponse$trace": "<p>A trace object that contains information about the guardrail behavior.</p>"
      }
    },
    "CountTokensInput": {
      "base": "<p>The input value for token counting. The value should be either an <code>InvokeModel</code> or <code>Converse</code> request body. </p>",
      "refs": {
        "CountTokensRequest$input": "<p>The input for which to count tokens. The structure of this parameter depends on whether you're counting tokens for an <code>InvokeModel</code> or <code>Converse</code> request:</p> <ul> <li> <p>For <code>InvokeModel</code> requests, provide the request body in the <code>invokeModel</code> field</p> </li> <li> <p>For <code>Converse</code> requests, provide the messages and system content in the <code>converse</code> field</p> </li> </ul> <p>The input format must be compatible with the model specified in the <code>modelId</code> parameter.</p>"
      }
    },
    "CountTokensRequest": {
      "base": null,
      "refs": {
      }
    },
    "CountTokensResponse": {
      "base": null,
      "refs": {
      }
    },
    "Document": {
      "base": null,
      "refs": {
        "ConverseRequest$additionalModelRequestFields": "<p>Additional inference parameters that the model supports, beyond the base set of inference parameters that <code>Converse</code> and <code>ConverseStream</code> support in the <code>inferenceConfig</code> field. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Model parameters</a>.</p>",
        "ConverseResponse$additionalModelResponseFields": "<p>Additional fields in the response that are unique to the model. </p>",
        "ConverseStreamRequest$additionalModelRequestFields": "<p>Additional inference parameters that the model supports, beyond the base set of inference parameters that <code>Converse</code> and <code>ConverseStream</code> support in the <code>inferenceConfig</code> field. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Model parameters</a>.</p>",
        "MessageStopEvent$additionalModelResponseFields": "<p>The additional model response fields.</p>",
        "ToolInputSchema$json": "<p>The JSON schema for the tool. For more information, see <a href=\"https://json-schema.org/understanding-json-schema/reference\">JSON Schema Reference</a>.</p>",
        "ToolResultContentBlock$json": "<p>A tool result that is JSON format data.</p>",
        "ToolUseBlock$input": "<p>The input to pass to the tool. </p>"
      }
    },
    "DocumentBlock": {
      "base": "<p>A document to include in a message.</p>",
      "refs": {
        "ContentBlock$document": "<p>A document to include in the message.</p>",
        "ToolResultContentBlock$document": "<p>A tool result that is a document.</p>"
      }
    },
    "DocumentBlockNameString": {
      "base": null,
      "refs": {
        "DocumentBlock$name": "<p>A name for the document. The name can only contain the following characters:</p> <ul> <li> <p>Alphanumeric characters</p> </li> <li> <p>Whitespace characters (no more than one in a row)</p> </li> <li> <p>Hyphens</p> </li> <li> <p>Parentheses</p> </li> <li> <p>Square brackets</p> </li> </ul> <note> <p>This field is vulnerable to prompt injections, because the model might inadvertently interpret it as instructions. Therefore, we recommend that you specify a neutral name.</p> </note>"
      }
    },
    "DocumentCharLocation": {
      "base": "<p>Specifies a character-level location within a document, providing precise positioning information for cited content using start and end character indices.</p>",
      "refs": {
        "CitationLocation$documentChar": "<p>The character-level location within the document where the cited content is found.</p>"
      }
    },
    "DocumentCharLocationDocumentIndexInteger": {
      "base": null,
      "refs": {
        "DocumentCharLocation$documentIndex": "<p>The index of the document within the array of documents provided in the request.</p>"
      }
    },
    "DocumentCharLocationEndInteger": {
      "base": null,
      "refs": {
        "DocumentCharLocation$end": "<p>The ending character position of the cited content within the document.</p>"
      }
    },
    "DocumentCharLocationStartInteger": {
      "base": null,
      "refs": {
        "DocumentCharLocation$start": "<p>The starting character position of the cited content within the document.</p>"
      }
    },
    "DocumentChunkLocation": {
      "base": "<p>Specifies a chunk-level location within a document, providing positioning information for cited content using logical document segments or chunks.</p>",
      "refs": {
        "CitationLocation$documentChunk": "<p>The chunk-level location within the document where the cited content is found, typically used for documents that have been segmented into logical chunks.</p>"
      }
    },
    "DocumentChunkLocationDocumentIndexInteger": {
      "base": null,
      "refs": {
        "DocumentChunkLocation$documentIndex": "<p>The index of the document within the array of documents provided in the request.</p>"
      }
    },
    "DocumentChunkLocationEndInteger": {
      "base": null,
      "refs": {
        "DocumentChunkLocation$end": "<p>The ending chunk identifier or index of the cited content within the document.</p>"
      }
    },
    "DocumentChunkLocationStartInteger": {
      "base": null,
      "refs": {
        "DocumentChunkLocation$start": "<p>The starting chunk identifier or index of the cited content within the document.</p>"
      }
    },
    "DocumentContentBlock": {
      "base": "<p>Contains the actual content of a document that can be processed by the model and potentially cited in the response.</p>",
      "refs": {
        "DocumentContentBlocks$member": null
      }
    },
    "DocumentContentBlocks": {
      "base": null,
      "refs": {
        "DocumentSource$content": "<p>The structured content of the document source, which may include various content blocks such as text, images, or other document elements.</p>"
      }
    },
    "DocumentFormat": {
      "base": null,
      "refs": {
        "DocumentBlock$format": "<p>The format of a document, or its extension.</p>"
      }
    },
    "DocumentPageLocation": {
      "base": "<p>Specifies a page-level location within a document, providing positioning information for cited content using page numbers.</p>",
      "refs": {
        "CitationLocation$documentPage": "<p>The page-level location within the document where the cited content is found.</p>"
      }
    },
    "DocumentPageLocationDocumentIndexInteger": {
      "base": null,
      "refs": {
        "DocumentPageLocation$documentIndex": "<p>The index of the document within the array of documents provided in the request.</p>"
      }
    },
    "DocumentPageLocationEndInteger": {
      "base": null,
      "refs": {
        "DocumentPageLocation$end": "<p>The ending page number of the cited content within the document.</p>"
      }
    },
    "DocumentPageLocationStartInteger": {
      "base": null,
      "refs": {
        "DocumentPageLocation$start": "<p>The starting page number of the cited content within the document.</p>"
      }
    },
    "DocumentSource": {
      "base": "<p>Contains the content of a document.</p>",
      "refs": {
        "DocumentBlock$source": "<p>Contains the content of the document.</p>"
      }
    },
    "DocumentSourceBytesBlob": {
      "base": null,
      "refs": {
        "DocumentSource$bytes": "<p>The raw bytes for the document. If you use an Amazon Web Services SDK, you don't need to encode the bytes in base64.</p>"
      }
    },
    "FoundationModelVersionIdentifier": {
      "base": "<p>The ARN or ID of an Amazon Bedrock model.</p>",
      "refs": {
        "CountTokensRequest$modelId": "<p>The unique identifier or ARN of the foundation model to use for token counting. Each model processes tokens differently, so the token count is specific to the model you specify.</p>"
      }
    },
    "GetAsyncInvokeRequest": {
      "base": null,
      "refs": {
      }
    },
    "GetAsyncInvokeResponse": {
      "base": null,
      "refs": {
      }
    },
    "GuardrailAction": {
      "base": null,
      "refs": {
        "ApplyGuardrailResponse$action": "<p>The action taken in the response from the guardrail.</p>"
      }
    },
    "GuardrailAssessment": {
      "base": "<p>A behavior assessment of the guardrail policies used in a call to the Converse API. </p>",
      "refs": {
        "GuardrailAssessmentList$member": null,
        "GuardrailAssessmentMap$value": null
      }
    },
    "GuardrailAssessmentList": {
      "base": null,
      "refs": {
        "ApplyGuardrailResponse$assessments": "<p>The assessment details in the response from the guardrail.</p>",
        "GuardrailAssessmentListMap$value": null
      }
    },
    "GuardrailAssessmentListMap": {
      "base": null,
      "refs": {
        "GuardrailTraceAssessment$outputAssessments": "<p>The output assessments.</p>"
      }
    },
    "GuardrailAssessmentMap": {
      "base": null,
      "refs": {
        "GuardrailTraceAssessment$inputAssessment": "<p>The input assessment.</p>"
      }
    },
    "GuardrailAutomatedReasoningDifferenceScenarioList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningTranslationAmbiguousFinding$differenceScenarios": "<p>Scenarios showing how the different translation options differ in meaning.</p>"
      }
    },
    "GuardrailAutomatedReasoningFinding": {
      "base": "<p>Represents a logical validation result from automated reasoning policy evaluation. The finding indicates whether claims in the input are logically valid, invalid, satisfiable, impossible, or have other logical issues.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFindingList$member": null
      }
    },
    "GuardrailAutomatedReasoningFindingList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningPolicyAssessment$findings": "<p>List of logical validation results produced by evaluating the input content against automated reasoning policies.</p>"
      }
    },
    "GuardrailAutomatedReasoningImpossibleFinding": {
      "base": "<p>Indicates that no valid claims can be made due to logical contradictions in the premises or rules.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$impossible": "<p>Contains the result when the automated reasoning evaluation determines that no valid logical conclusions can be drawn due to contradictions in the premises or policy rules themselves.</p>"
      }
    },
    "GuardrailAutomatedReasoningInputTextReference": {
      "base": "<p>References a portion of the original input text that corresponds to logical elements.</p>",
      "refs": {
        "GuardrailAutomatedReasoningInputTextReferenceList$member": null
      }
    },
    "GuardrailAutomatedReasoningInputTextReferenceList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningTranslation$untranslatedPremises": "<p>References to portions of the original input text that correspond to the premises but could not be fully translated.</p>",
        "GuardrailAutomatedReasoningTranslation$untranslatedClaims": "<p>References to portions of the original input text that correspond to the claims but could not be fully translated.</p>"
      }
    },
    "GuardrailAutomatedReasoningInvalidFinding": {
      "base": "<p>Indicates that the claims are logically false and contradictory to the established rules or premises.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$invalid": "<p>Contains the result when the automated reasoning evaluation determines that the claims in the input are logically invalid and contradict the established premises or policy rules.</p>"
      }
    },
    "GuardrailAutomatedReasoningLogicWarning": {
      "base": "<p>Identifies logical issues in the translated statements that exist independent of any policy rules, such as statements that are always true or always false.</p>",
      "refs": {
        "GuardrailAutomatedReasoningImpossibleFinding$logicWarning": "<p>Indication of a logic issue with the translation without needing to consider the automated reasoning policy rules.</p>",
        "GuardrailAutomatedReasoningInvalidFinding$logicWarning": "<p>Indication of a logic issue with the translation without needing to consider the automated reasoning policy rules.</p>",
        "GuardrailAutomatedReasoningSatisfiableFinding$logicWarning": "<p>Indication of a logic issue with the translation without needing to consider the automated reasoning policy rules.</p>",
        "GuardrailAutomatedReasoningValidFinding$logicWarning": "<p>Indication of a logic issue with the translation without needing to consider the automated reasoning policy rules.</p>"
      }
    },
    "GuardrailAutomatedReasoningLogicWarningType": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningLogicWarning$type": "<p>The category of the detected logical issue, such as statements that are always true or always false.</p>"
      }
    },
    "GuardrailAutomatedReasoningNoTranslationsFinding": {
      "base": "<p>Indicates that no relevant logical information could be extracted from the input for validation.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$noTranslations": "<p>Contains the result when the automated reasoning evaluation cannot extract any relevant logical information from the input that can be validated against the policy rules.</p>"
      }
    },
    "GuardrailAutomatedReasoningPoliciesProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$automatedReasoningPolicies": "<p>The number of automated reasoning policies that were processed during the guardrail evaluation.</p>"
      }
    },
    "GuardrailAutomatedReasoningPolicyAssessment": {
      "base": "<p>Contains the results of automated reasoning policy evaluation, including logical findings about the validity of claims made in the input content.</p>",
      "refs": {
        "GuardrailAssessment$automatedReasoningPolicy": "<p>The automated reasoning policy assessment results, including logical validation findings for the input content.</p>"
      }
    },
    "GuardrailAutomatedReasoningPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$automatedReasoningPolicyUnits": "<p>The number of text units processed by the automated reasoning policy.</p>"
      }
    },
    "GuardrailAutomatedReasoningPolicyVersionArn": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningRule$policyVersionArn": "<p>The ARN of the automated reasoning policy version that contains this rule.</p>"
      }
    },
    "GuardrailAutomatedReasoningRule": {
      "base": "<p>References a specific automated reasoning policy rule that was applied during evaluation.</p>",
      "refs": {
        "GuardrailAutomatedReasoningRuleList$member": null
      }
    },
    "GuardrailAutomatedReasoningRuleList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningImpossibleFinding$contradictingRules": "<p>The automated reasoning policy rules that contradict the claims and/or premises in the input.</p>",
        "GuardrailAutomatedReasoningInvalidFinding$contradictingRules": "<p>The automated reasoning policy rules that contradict the claims in the input.</p>",
        "GuardrailAutomatedReasoningValidFinding$supportingRules": "<p>The automated reasoning policy rules that support why this result is considered valid.</p>"
      }
    },
    "GuardrailAutomatedReasoningSatisfiableFinding": {
      "base": "<p>Indicates that the claims could be either true or false depending on additional assumptions not provided in the input.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$satisfiable": "<p>Contains the result when the automated reasoning evaluation determines that the claims in the input could be either true or false depending on additional assumptions not provided in the input context.</p>"
      }
    },
    "GuardrailAutomatedReasoningScenario": {
      "base": "<p>Represents a logical scenario where claims can be evaluated as true or false, containing specific logical assignments.</p>",
      "refs": {
        "GuardrailAutomatedReasoningDifferenceScenarioList$member": null,
        "GuardrailAutomatedReasoningSatisfiableFinding$claimsTrueScenario": "<p>An example scenario demonstrating how the claims could be logically true.</p>",
        "GuardrailAutomatedReasoningSatisfiableFinding$claimsFalseScenario": "<p>An example scenario demonstrating how the claims could be logically false.</p>",
        "GuardrailAutomatedReasoningValidFinding$claimsTrueScenario": "<p>An example scenario demonstrating how the claims are logically true.</p>"
      }
    },
    "GuardrailAutomatedReasoningStatement": {
      "base": "<p>A logical statement that includes both formal logic representation and natural language explanation.</p>",
      "refs": {
        "GuardrailAutomatedReasoningStatementList$member": null
      }
    },
    "GuardrailAutomatedReasoningStatementList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningLogicWarning$premises": "<p>The logical statements that serve as premises under which the claims are validated.</p>",
        "GuardrailAutomatedReasoningLogicWarning$claims": "<p>The logical statements that are validated while assuming the policy and premises.</p>",
        "GuardrailAutomatedReasoningScenario$statements": "<p>List of logical assignments and statements that define this scenario.</p>",
        "GuardrailAutomatedReasoningTranslation$premises": "<p>The logical statements that serve as the foundation or assumptions for the claims.</p>",
        "GuardrailAutomatedReasoningTranslation$claims": "<p>The logical statements that are being validated against the premises and policy rules.</p>"
      }
    },
    "GuardrailAutomatedReasoningStatementLogicContent": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningStatement$logic": "<p>The formal logical representation of the statement.</p>"
      }
    },
    "GuardrailAutomatedReasoningStatementNaturalLanguageContent": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningInputTextReference$text": "<p>The specific text from the original input that this reference points to.</p>",
        "GuardrailAutomatedReasoningStatement$naturalLanguage": "<p>The natural language explanation of the logical statement.</p>"
      }
    },
    "GuardrailAutomatedReasoningTooComplexFinding": {
      "base": "<p>Indicates that the input exceeds the processing capacity due to the volume or complexity of the logical information.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$tooComplex": "<p>Contains the result when the automated reasoning evaluation cannot process the input due to its complexity or volume exceeding the system's processing capacity for logical analysis.</p>"
      }
    },
    "GuardrailAutomatedReasoningTranslation": {
      "base": "<p>Contains the logical translation of natural language input into formal logical statements, including premises, claims, and confidence scores.</p>",
      "refs": {
        "GuardrailAutomatedReasoningImpossibleFinding$translation": "<p>The logical translation of the input that this finding evaluates.</p>",
        "GuardrailAutomatedReasoningInvalidFinding$translation": "<p>The logical translation of the input that this finding invalidates.</p>",
        "GuardrailAutomatedReasoningSatisfiableFinding$translation": "<p>The logical translation of the input that this finding evaluates.</p>",
        "GuardrailAutomatedReasoningTranslationList$member": null,
        "GuardrailAutomatedReasoningValidFinding$translation": "<p>The logical translation of the input that this finding validates.</p>"
      }
    },
    "GuardrailAutomatedReasoningTranslationAmbiguousFinding": {
      "base": "<p>Indicates that the input has multiple valid logical interpretations, requiring additional context or clarification.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$translationAmbiguous": "<p>Contains the result when the automated reasoning evaluation detects that the input has multiple valid logical interpretations, requiring additional context or clarification to proceed with validation.</p>"
      }
    },
    "GuardrailAutomatedReasoningTranslationConfidence": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningTranslation$confidence": "<p>A confidence score between 0 and 1 indicating how certain the system is about the logical translation.</p>"
      }
    },
    "GuardrailAutomatedReasoningTranslationList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningTranslationOption$translations": "<p>Example translations that provide this possible interpretation of the input.</p>"
      }
    },
    "GuardrailAutomatedReasoningTranslationOption": {
      "base": "<p>Represents one possible logical interpretation of ambiguous input content.</p>",
      "refs": {
        "GuardrailAutomatedReasoningTranslationOptionList$member": null
      }
    },
    "GuardrailAutomatedReasoningTranslationOptionList": {
      "base": null,
      "refs": {
        "GuardrailAutomatedReasoningTranslationAmbiguousFinding$options": "<p>Different logical interpretations that were detected during translation of the input.</p>"
      }
    },
    "GuardrailAutomatedReasoningValidFinding": {
      "base": "<p>Indicates that the claims are definitively true and logically implied by the premises, with no possible alternative interpretations.</p>",
      "refs": {
        "GuardrailAutomatedReasoningFinding$valid": "<p>Contains the result when the automated reasoning evaluation determines that the claims in the input are logically valid and definitively true based on the provided premises and policy rules.</p>"
      }
    },
    "GuardrailConfiguration": {
      "base": "<p>Configuration information for a guardrail that you use with the <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> operation.</p>",
      "refs": {
        "ConverseRequest$guardrailConfig": "<p>Configuration information for a guardrail that you want to use in the request. If you include <code>guardContent</code> blocks in the <code>content</code> field in the <code>messages</code> field, the guardrail operates only on those messages. If you include no <code>guardContent</code> blocks, the guardrail operates on all messages in the request body and in any included prompt resource.</p>"
      }
    },
    "GuardrailContentBlock": {
      "base": "<p>The content block to be evaluated by the guardrail.</p>",
      "refs": {
        "GuardrailContentBlockList$member": null
      }
    },
    "GuardrailContentBlockList": {
      "base": null,
      "refs": {
        "ApplyGuardrailRequest$content": "<p>The content details used in the request to apply the guardrail.</p>"
      }
    },
    "GuardrailContentFilter": {
      "base": "<p>The content filter for a guardrail.</p>",
      "refs": {
        "GuardrailContentFilterList$member": null
      }
    },
    "GuardrailContentFilterConfidence": {
      "base": null,
      "refs": {
        "GuardrailContentFilter$confidence": "<p>The guardrail confidence.</p>"
      }
    },
    "GuardrailContentFilterList": {
      "base": null,
      "refs": {
        "GuardrailContentPolicyAssessment$filters": "<p>The content policy filters.</p>"
      }
    },
    "GuardrailContentFilterStrength": {
      "base": null,
      "refs": {
        "GuardrailContentFilter$filterStrength": "<p>The filter strength setting for the guardrail content filter.</p>"
      }
    },
    "GuardrailContentFilterType": {
      "base": null,
      "refs": {
        "GuardrailContentFilter$type": "<p>The guardrail type.</p>"
      }
    },
    "GuardrailContentPolicyAction": {
      "base": null,
      "refs": {
        "GuardrailContentFilter$action": "<p>The guardrail action.</p>"
      }
    },
    "GuardrailContentPolicyAssessment": {
      "base": "<p>An assessment of a content policy for a guardrail.</p>",
      "refs": {
        "GuardrailAssessment$contentPolicy": "<p>The content policy.</p>"
      }
    },
    "GuardrailContentPolicyImageUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$contentPolicyImageUnits": "<p>The content policy image units processed by the guardrail.</p>"
      }
    },
    "GuardrailContentPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$contentPolicyUnits": "<p>The content policy units processed by the guardrail.</p>"
      }
    },
    "GuardrailContentQualifier": {
      "base": null,
      "refs": {
        "GuardrailContentQualifierList$member": null
      }
    },
    "GuardrailContentQualifierList": {
      "base": null,
      "refs": {
        "GuardrailTextBlock$qualifiers": "<p>The qualifiers describing the text block.</p>"
      }
    },
    "GuardrailContentSource": {
      "base": null,
      "refs": {
        "ApplyGuardrailRequest$source": "<p>The source of data used in the request to apply the guardrail.</p>"
      }
    },
    "GuardrailContextualGroundingFilter": {
      "base": "<p>The details for the guardrail's contextual grounding filter.</p>",
      "refs": {
        "GuardrailContextualGroundingFilters$member": null
      }
    },
    "GuardrailContextualGroundingFilterScoreDouble": {
      "base": null,
      "refs": {
        "GuardrailContextualGroundingFilter$score": "<p>The score generated by the contextual grounding filter.</p>"
      }
    },
    "GuardrailContextualGroundingFilterThresholdDouble": {
      "base": null,
      "refs": {
        "GuardrailContextualGroundingFilter$threshold": "<p>The threshold used by the contextual grounding filter to determine whether the content is grounded.</p>"
      }
    },
    "GuardrailContextualGroundingFilterType": {
      "base": null,
      "refs": {
        "GuardrailContextualGroundingFilter$type": "<p>The contextual grounding filter type.</p>"
      }
    },
    "GuardrailContextualGroundingFilters": {
      "base": null,
      "refs": {
        "GuardrailContextualGroundingPolicyAssessment$filters": "<p>The filter details for the guardrail's contextual grounding filter.</p>"
      }
    },
    "GuardrailContextualGroundingPolicyAction": {
      "base": null,
      "refs": {
        "GuardrailContextualGroundingFilter$action": "<p>The action performed by the guardrail's contextual grounding filter.</p>"
      }
    },
    "GuardrailContextualGroundingPolicyAssessment": {
      "base": "<p>The policy assessment details for the guardrail's contextual grounding filter.</p>",
      "refs": {
        "GuardrailAssessment$contextualGroundingPolicy": "<p>The contextual grounding policy used for the guardrail assessment.</p>"
      }
    },
    "GuardrailContextualGroundingPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$contextualGroundingPolicyUnits": "<p>The contextual grounding policy units processed by the guardrail.</p>"
      }
    },
    "GuardrailConverseContentBlock": {
      "base": "<p>A content block for selective guarding with the <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> or <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a> API operations. </p>",
      "refs": {
        "ContentBlock$guardContent": "<p>Contains the content to assess with the guardrail. If you don't specify <code>guardContent</code> in a call to the Converse API, the guardrail (if passed in the Converse API) assesses the entire message.</p> <p>For more information, see <i>Use a guardrail with the Converse API</i> in the <i>Amazon Bedrock User Guide</i>. </p>",
        "SystemContentBlock$guardContent": "<p>A content block to assess with the guardrail. Use with the <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> or <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a> API operations. </p> <p>For more information, see <i>Use a guardrail with the Converse API</i> in the <i>Amazon Bedrock User Guide</i>.</p>"
      }
    },
    "GuardrailConverseContentQualifier": {
      "base": null,
      "refs": {
        "GuardrailConverseContentQualifierList$member": null
      }
    },
    "GuardrailConverseContentQualifierList": {
      "base": null,
      "refs": {
        "GuardrailConverseTextBlock$qualifiers": "<p>The qualifiers describing the text block.</p>"
      }
    },
    "GuardrailConverseImageBlock": {
      "base": "<p>An image block that contains images that you want to assess with a guardrail.</p>",
      "refs": {
        "GuardrailConverseContentBlock$image": "<p>The image within the converse content block to be evaluated by the guardrail.</p>"
      }
    },
    "GuardrailConverseImageFormat": {
      "base": null,
      "refs": {
        "GuardrailConverseImageBlock$format": "<p>The format of the image in the guardrail converse image block.</p>"
      }
    },
    "GuardrailConverseImageSource": {
      "base": "<p>The image source (image bytes) for the guardrail converse image block.</p>",
      "refs": {
        "GuardrailConverseImageBlock$source": "<p>The image source (image bytes) of the guardrail converse image block.</p>"
      }
    },
    "GuardrailConverseImageSourceBytesBlob": {
      "base": null,
      "refs": {
        "GuardrailConverseImageSource$bytes": "<p>The raw image bytes for the image.</p>"
      }
    },
    "GuardrailConverseTextBlock": {
      "base": "<p>A text block that contains text that you want to assess with a guardrail. For more information, see <a>GuardrailConverseContentBlock</a>.</p>",
      "refs": {
        "GuardrailConverseContentBlock$text": "<p>The text to guard.</p>"
      }
    },
    "GuardrailCoverage": {
      "base": "<p>The guardrail coverage details.</p>",
      "refs": {
        "ApplyGuardrailResponse$guardrailCoverage": "<p>The guardrail coverage details in the apply guardrail response.</p>",
        "GuardrailInvocationMetrics$guardrailCoverage": "<p>The coverage details for the guardrail invocation metrics.</p>"
      }
    },
    "GuardrailCustomWord": {
      "base": "<p>A custom word configured in a guardrail.</p>",
      "refs": {
        "GuardrailCustomWordList$member": null
      }
    },
    "GuardrailCustomWordList": {
      "base": null,
      "refs": {
        "GuardrailWordPolicyAssessment$customWords": "<p>Custom words in the assessment.</p>"
      }
    },
    "GuardrailIdentifier": {
      "base": null,
      "refs": {
        "ApplyGuardrailRequest$guardrailIdentifier": "<p>The guardrail identifier used in the request to apply the guardrail.</p>",
        "GuardrailConfiguration$guardrailIdentifier": "<p>The identifier for the guardrail.</p>",
        "GuardrailStreamConfiguration$guardrailIdentifier": "<p>The identifier for the guardrail.</p>",
        "InvokeModelRequest$guardrailIdentifier": "<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error is thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>",
        "InvokeModelWithResponseStreamRequest$guardrailIdentifier": "<p>The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.</p> <p>An error is thrown in the following situations.</p> <ul> <li> <p>You don't provide a guardrail identifier but you specify the <code>amazon-bedrock-guardrailConfig</code> field in the request body.</p> </li> <li> <p>You enable the guardrail but the <code>contentType</code> isn't <code>application/json</code>.</p> </li> <li> <p>You provide a guardrail identifier, but <code>guardrailVersion</code> isn't specified.</p> </li> </ul>"
      }
    },
    "GuardrailImageBlock": {
      "base": "<p>Contains an image that you want a guardrail to assess. This block is accepted by the standalone <code>ApplyGuardrail</code> API.</p>",
      "refs": {
        "GuardrailContentBlock$image": "<p>The image within the guardrail content block to be evaluated by the guardrail.</p>"
      }
    },
    "GuardrailImageCoverage": {
      "base": "<p>The details of the guardrail image coverage.</p>",
      "refs": {
        "GuardrailCoverage$images": "<p>The guardrail coverage for images (the number of images that guardrails guarded).</p>"
      }
    },
    "GuardrailImageFormat": {
      "base": null,
      "refs": {
        "GuardrailImageBlock$format": "<p>The format of the image in the guardrail image block.</p>"
      }
    },
    "GuardrailImageSource": {
      "base": "<p>The image source (image bytes) for the guardrail image block. This object is used by the standalone <code>ApplyGuardrail</code> API.</p>",
      "refs": {
        "GuardrailImageBlock$source": "<p>The image source (image bytes) of the image in the guardrail image block.</p>"
      }
    },
    "GuardrailImageSourceBytesBlob": {
      "base": null,
      "refs": {
        "GuardrailImageSource$bytes": "<p>The raw bytes of the guardrail image source. This object is used by the standalone <code>ApplyGuardrail</code> API.</p>"
      }
    },
    "GuardrailInvocationMetrics": {
      "base": "<p>The invocation metrics for the guardrail.</p>",
      "refs": {
        "GuardrailAssessment$invocationMetrics": "<p>The invocation metrics for the guardrail assessment.</p>"
      }
    },
    "GuardrailManagedWord": {
      "base": "<p>A managed word configured in a guardrail.</p>",
      "refs": {
        "GuardrailManagedWordList$member": null
      }
    },
    "GuardrailManagedWordList": {
      "base": null,
      "refs": {
        "GuardrailWordPolicyAssessment$managedWordLists": "<p>Managed word lists in the assessment.</p>"
      }
    },
    "GuardrailManagedWordType": {
      "base": null,
      "refs": {
        "GuardrailManagedWord$type": "<p>The type for the managed word.</p>"
      }
    },
    "GuardrailOutputContent": {
      "base": "<p>The output content produced by the guardrail.</p>",
      "refs": {
        "GuardrailOutputContentList$member": null
      }
    },
    "GuardrailOutputContentList": {
      "base": null,
      "refs": {
        "ApplyGuardrailResponse$outputs": "<p>The output details in the response from the guardrail.</p>"
      }
    },
    "GuardrailOutputScope": {
      "base": null,
      "refs": {
        "ApplyGuardrailRequest$outputScope": "<p>Specifies the scope of the output that you get in the response. Set to <code>FULL</code> to return the entire output, including any detected and non-detected entries in the response for enhanced debugging.</p> <p>Note that the full output scope doesn't apply to word filters or regex in sensitive information filters. It does apply to all other filtering policies, including sensitive information with filters that can detect personally identifiable information (PII).</p>"
      }
    },
    "GuardrailOutputText": {
      "base": null,
      "refs": {
        "GuardrailOutputContent$text": "<p>The specific text for the output content produced by the guardrail.</p>",
        "ModelOutputs$member": null
      }
    },
    "GuardrailPiiEntityFilter": {
      "base": "<p>A Personally Identifiable Information (PII) entity configured in a guardrail.</p>",
      "refs": {
        "GuardrailPiiEntityFilterList$member": null
      }
    },
    "GuardrailPiiEntityFilterList": {
      "base": null,
      "refs": {
        "GuardrailSensitiveInformationPolicyAssessment$piiEntities": "<p>The PII entities in the assessment.</p>"
      }
    },
    "GuardrailPiiEntityType": {
      "base": null,
      "refs": {
        "GuardrailPiiEntityFilter$type": "<p>The PII entity filter type.</p>"
      }
    },
    "GuardrailProcessingLatency": {
      "base": null,
      "refs": {
        "GuardrailInvocationMetrics$guardrailProcessingLatency": "<p>The processing latency details for the guardrail invocation metrics.</p>"
      }
    },
    "GuardrailRegexFilter": {
      "base": "<p>A Regex filter configured in a guardrail.</p>",
      "refs": {
        "GuardrailRegexFilterList$member": null
      }
    },
    "GuardrailRegexFilterList": {
      "base": null,
      "refs": {
        "GuardrailSensitiveInformationPolicyAssessment$regexes": "<p>The regex queries in the assessment.</p>"
      }
    },
    "GuardrailSensitiveInformationPolicyAction": {
      "base": null,
      "refs": {
        "GuardrailPiiEntityFilter$action": "<p>The PII entity filter action.</p>",
        "GuardrailRegexFilter$action": "<p>The regex filter action.</p>"
      }
    },
    "GuardrailSensitiveInformationPolicyAssessment": {
      "base": "<p>The assessment for a Personally Identifiable Information (PII) policy.</p>",
      "refs": {
        "GuardrailAssessment$sensitiveInformationPolicy": "<p>The sensitive information policy.</p>"
      }
    },
    "GuardrailSensitiveInformationPolicyFreeUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$sensitiveInformationPolicyFreeUnits": "<p>The sensitive information policy free units processed by the guardrail.</p>"
      }
    },
    "GuardrailSensitiveInformationPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$sensitiveInformationPolicyUnits": "<p>The sensitive information policy units processed by the guardrail.</p>"
      }
    },
    "GuardrailStreamConfiguration": {
      "base": "<p>Configuration information for a guardrail that you use with the <a>ConverseStream</a> action. </p>",
      "refs": {
        "ConverseStreamRequest$guardrailConfig": "<p>Configuration information for a guardrail that you want to use in the request. If you include <code>guardContent</code> blocks in the <code>content</code> field in the <code>messages</code> field, the guardrail operates only on those messages. If you include no <code>guardContent</code> blocks, the guardrail operates on all messages in the request body and in any included prompt resource.</p>"
      }
    },
    "GuardrailStreamProcessingMode": {
      "base": null,
      "refs": {
        "GuardrailStreamConfiguration$streamProcessingMode": "<p>The processing mode. For more information, see <i>Configure streaming response behavior</i> in the <i>Amazon Bedrock User Guide</i>. </p>"
      }
    },
    "GuardrailTextBlock": {
      "base": "<p>The text block to be evaluated by the guardrail.</p>",
      "refs": {
        "GuardrailContentBlock$text": "<p>The text within the content block to be evaluated by the guardrail.</p>"
      }
    },
    "GuardrailTextCharactersCoverage": {
      "base": "<p>The guardrail coverage for the text characters.</p>",
      "refs": {
        "GuardrailCoverage$textCharacters": "<p>The guardrail coverage details for text characters.</p>"
      }
    },
    "GuardrailTopic": {
      "base": "<p>Information about a topic guardrail.</p>",
      "refs": {
        "GuardrailTopicList$member": null
      }
    },
    "GuardrailTopicList": {
      "base": null,
      "refs": {
        "GuardrailTopicPolicyAssessment$topics": "<p>The topics in the assessment.</p>"
      }
    },
    "GuardrailTopicPolicyAction": {
      "base": null,
      "refs": {
        "GuardrailTopic$action": "<p>The action the guardrail should take when it intervenes on a topic.</p>"
      }
    },
    "GuardrailTopicPolicyAssessment": {
      "base": "<p>A behavior assessment of a topic policy.</p>",
      "refs": {
        "GuardrailAssessment$topicPolicy": "<p>The topic policy.</p>"
      }
    },
    "GuardrailTopicPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$topicPolicyUnits": "<p>The topic policy units processed by the guardrail.</p>"
      }
    },
    "GuardrailTopicType": {
      "base": null,
      "refs": {
        "GuardrailTopic$type": "<p>The behavior type that the guardrail should perform when the model detects the topic.</p>"
      }
    },
    "GuardrailTrace": {
      "base": null,
      "refs": {
        "GuardrailConfiguration$trace": "<p>The trace behavior for the guardrail.</p>",
        "GuardrailStreamConfiguration$trace": "<p>The trace behavior for the guardrail.</p>"
      }
    },
    "GuardrailTraceAssessment": {
      "base": "<p>A top-level guardrail trace object. For more information, see <a>ConverseTrace</a>.</p>",
      "refs": {
        "ConverseStreamTrace$guardrail": "<p>The guardrail trace object. </p>",
        "ConverseTrace$guardrail": "<p>The guardrail trace object. </p>"
      }
    },
    "GuardrailUsage": {
      "base": "<p>The details on the use of the guardrail.</p>",
      "refs": {
        "ApplyGuardrailResponse$usage": "<p>The usage details in the response from the guardrail.</p>",
        "GuardrailInvocationMetrics$usage": "<p>The usage details for the guardrail invocation metrics.</p>"
      }
    },
    "GuardrailVersion": {
      "base": null,
      "refs": {
        "ApplyGuardrailRequest$guardrailVersion": "<p>The guardrail version used in the request to apply the guardrail.</p>",
        "GuardrailConfiguration$guardrailVersion": "<p>The version of the guardrail.</p>",
        "GuardrailStreamConfiguration$guardrailVersion": "<p>The version of the guardrail.</p>",
        "InvokeModelRequest$guardrailVersion": "<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>",
        "InvokeModelWithResponseStreamRequest$guardrailVersion": "<p>The version number for the guardrail. The value can also be <code>DRAFT</code>.</p>"
      }
    },
    "GuardrailWordPolicyAction": {
      "base": null,
      "refs": {
        "GuardrailCustomWord$action": "<p>The action for the custom word.</p>",
        "GuardrailManagedWord$action": "<p>The action for the managed word.</p>"
      }
    },
    "GuardrailWordPolicyAssessment": {
      "base": "<p>The word policy assessment.</p>",
      "refs": {
        "GuardrailAssessment$wordPolicy": "<p>The word policy.</p>"
      }
    },
    "GuardrailWordPolicyUnitsProcessed": {
      "base": null,
      "refs": {
        "GuardrailUsage$wordPolicyUnits": "<p>The word policy units processed by the guardrail.</p>"
      }
    },
    "ImageBlock": {
      "base": "<p>Image content for a message.</p>",
      "refs": {
        "ContentBlock$image": "<p>Image to include in the message. </p> <note> <p>This field is only supported by Anthropic Claude 3 models.</p> </note>",
        "ToolResultContentBlock$image": "<p>A tool result that is an image.</p> <note> <p>This field is only supported by Anthropic Claude 3 models.</p> </note>"
      }
    },
    "ImageFormat": {
      "base": null,
      "refs": {
        "ImageBlock$format": "<p>The format of the image.</p>"
      }
    },
    "ImageSource": {
      "base": "<p>The source for an image.</p>",
      "refs": {
        "ImageBlock$source": "<p>The source for the image.</p>"
      }
    },
    "ImageSourceBytesBlob": {
      "base": null,
      "refs": {
        "ImageSource$bytes": "<p>The raw image bytes for the image. If you use an AWS SDK, you don't need to encode the image bytes in base64.</p>"
      }
    },
    "ImagesGuarded": {
      "base": null,
      "refs": {
        "GuardrailImageCoverage$guarded": "<p>The number (integer) of images that the guardrail guarded.</p>"
      }
    },
    "ImagesTotal": {
      "base": null,
      "refs": {
        "GuardrailImageCoverage$total": "<p>The total number of images (integer) in the request, both guarded and unguarded.</p>"
      }
    },
    "InferenceConfiguration": {
      "base": "<p>Base inference parameters to pass to a model in a call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> or <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters for foundation models</a>.</p> <p>If you need to pass additional parameters that the model supports, use the <code>additionalModelRequestFields</code> request field in the call to <code>Converse</code> or <code>ConverseStream</code>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Model parameters</a>.</p>",
      "refs": {
        "ConverseRequest$inferenceConfig": "<p>Inference parameters to pass to the model. <code>Converse</code> and <code>ConverseStream</code> support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the <code>additionalModelRequestFields</code> request field.</p>",
        "ConverseStreamRequest$inferenceConfig": "<p>Inference parameters to pass to the model. <code>Converse</code> and <code>ConverseStream</code> support a base set of inference parameters. If you need to pass additional parameters that the model supports, use the <code>additionalModelRequestFields</code> request field.</p>"
      }
    },
    "InferenceConfigurationMaxTokensInteger": {
      "base": null,
      "refs": {
        "InferenceConfiguration$maxTokens": "<p>The maximum number of tokens to allow in the generated response. The default value is the maximum allowed value for the model that you are using. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters for foundation models</a>. </p>"
      }
    },
    "InferenceConfigurationStopSequencesList": {
      "base": null,
      "refs": {
        "InferenceConfiguration$stopSequences": "<p>A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response. </p>"
      }
    },
    "InferenceConfigurationTemperatureFloat": {
      "base": null,
      "refs": {
        "InferenceConfiguration$temperature": "<p>The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.</p> <p>The default value is the default value for the model that you are using. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters for foundation models</a>. </p>"
      }
    },
    "InferenceConfigurationTopPFloat": {
      "base": null,
      "refs": {
        "InferenceConfiguration$topP": "<p>The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for <code>topP</code>, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.</p> <p>The default value is the default value for the model that you are using. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters for foundation models</a>. </p>"
      }
    },
    "Integer": {
      "base": null,
      "refs": {
        "CountTokensResponse$inputTokens": "<p>The number of tokens in the provided input according to the specified model's tokenization rules. This count represents the number of input tokens that would be processed if the same input were sent to the model in an inference request. Use this value to estimate costs and ensure your inputs stay within model token limits.</p>"
      }
    },
    "InternalServerException": {
      "base": "<p>An internal server error occurred. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-internal-failure\">InternalFailure</a> in the Amazon Bedrock User Guide</p>",
      "refs": {
        "ConverseStreamOutput$internalServerException": "<p>An internal server error occurred. Retry your request.</p>",
        "InvokeModelWithBidirectionalStreamOutput$internalServerException": "<p>The request encountered an unknown internal error.</p>",
        "ResponseStream$internalServerException": "<p>An internal server error occurred. Retry your request.</p>"
      }
    },
    "InvocationArn": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$invocationArn": "<p>The invocation's ARN.</p>",
        "GetAsyncInvokeRequest$invocationArn": "<p>The invocation's ARN.</p>",
        "GetAsyncInvokeResponse$invocationArn": "<p>The invocation's ARN.</p>",
        "StartAsyncInvokeResponse$invocationArn": "<p>The ARN of the invocation.</p>"
      }
    },
    "InvokeModelIdentifier": {
      "base": null,
      "refs": {
        "InvokeModelRequest$modelId": "<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model or throughput that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference-support.html\">Supported Regions and models for cross-region inference</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, specify the ARN of the custom model deployment (for on-demand inference) or the ARN of your provisioned model (for Provisioned Throughput). For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html\">imported model</a>, specify the ARN of the imported model. You can get the model ARN from a successful call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelImportJob.html\">CreateModelImportJob</a> or from the Imported models page in the Amazon Bedrock console.</p> </li> </ul>",
        "InvokeModelWithBidirectionalStreamRequest$modelId": "<p>The model ID or ARN of the model ID to use. Currently, only <code>amazon.nova-sonic-v1:0</code> is supported.</p>",
        "InvokeModelWithResponseStreamRequest$modelId": "<p>The unique identifier of the model to invoke to run inference.</p> <p>The <code>modelId</code> to provide depends on the type of model or throughput that you use:</p> <ul> <li> <p>If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns\">Amazon Bedrock base model IDs (on-demand throughput)</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an inference profile, specify the inference profile ID or its ARN. For a list of inference profile IDs, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/cross-region-inference-support.html\">Supported Regions and models for cross-region inference</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-thru-use.html\">Run inference using a Provisioned Throughput</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use a custom model, specify the ARN of the custom model deployment (for on-demand inference) or the ARN of your provisioned model (for Provisioned Throughput). For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-use.html\">Use a custom model in Amazon Bedrock</a> in the Amazon Bedrock User Guide.</p> </li> <li> <p>If you use an <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-customization-import-model.html\">imported model</a>, specify the ARN of the imported model. You can get the model ARN from a successful call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelImportJob.html\">CreateModelImportJob</a> or from the Imported models page in the Amazon Bedrock console.</p> </li> </ul>"
      }
    },
    "InvokeModelRequest": {
      "base": null,
      "refs": {
      }
    },
    "InvokeModelResponse": {
      "base": null,
      "refs": {
      }
    },
    "InvokeModelTokensRequest": {
      "base": "<p>The body of an <code>InvokeModel</code> API request for token counting. This structure mirrors the input format for the <code>InvokeModel</code> operation, allowing you to count tokens for raw text inference requests.</p>",
      "refs": {
        "CountTokensInput$invokeModel": "<p>An <code>InvokeModel</code> request for which to count tokens. Use this field when you want to count tokens for a raw text input that would be sent to the <code>InvokeModel</code> operation.</p>"
      }
    },
    "InvokeModelWithBidirectionalStreamInput": {
      "base": "<p>The payload content (a speech chunk) for the bidirectional input of the invocation step.</p>",
      "refs": {
        "InvokeModelWithBidirectionalStreamRequest$body": "<p>The prompt and inference parameters in the format specified in the <code>BidirectionalInputPayloadPart</code> in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/api-methods-run.html\">Run inference</a> in the Amazon Bedrock User Guide.</p>"
      }
    },
    "InvokeModelWithBidirectionalStreamOutput": {
      "base": "<p>Output from the bidirectional stream that was used for model invocation.</p>",
      "refs": {
        "InvokeModelWithBidirectionalStreamResponse$body": "<p>Streaming response from the model in the format specified by the <code>BidirectionalOutputPayloadPart</code> header.</p>"
      }
    },
    "InvokeModelWithBidirectionalStreamRequest": {
      "base": null,
      "refs": {
      }
    },
    "InvokeModelWithBidirectionalStreamResponse": {
      "base": null,
      "refs": {
      }
    },
    "InvokeModelWithResponseStreamRequest": {
      "base": null,
      "refs": {
      }
    },
    "InvokeModelWithResponseStreamResponse": {
      "base": null,
      "refs": {
      }
    },
    "InvokedModelId": {
      "base": null,
      "refs": {
        "PromptRouterTrace$invokedModelId": "<p>The ID of the invoked model.</p>"
      }
    },
    "KmsKeyId": {
      "base": null,
      "refs": {
        "AsyncInvokeS3OutputDataConfig$kmsKeyId": "<p>A KMS encryption key ID.</p>"
      }
    },
    "ListAsyncInvokesRequest": {
      "base": null,
      "refs": {
      }
    },
    "ListAsyncInvokesResponse": {
      "base": null,
      "refs": {
      }
    },
    "Long": {
      "base": null,
      "refs": {
        "ConverseMetrics$latencyMs": "<p>The latency of the call to <code>Converse</code>, in milliseconds. </p>",
        "ConverseStreamMetrics$latencyMs": "<p>The latency for the streaming request, in milliseconds.</p>"
      }
    },
    "MaxResults": {
      "base": null,
      "refs": {
        "ListAsyncInvokesRequest$maxResults": "<p>The maximum number of invocations to return in one page of results.</p>"
      }
    },
    "Message": {
      "base": "<p>A message input to, or returned from, a call to <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html\">Converse</a> or <a href=\"https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html\">ConverseStream</a>.</p>",
      "refs": {
        "ConverseOutput$message": "<p>The message that the model generates.</p>",
        "Messages$member": null
      }
    },
    "MessageStartEvent": {
      "base": "<p>The start of a message.</p>",
      "refs": {
        "ConverseStreamOutput$messageStart": "<p>Message start information.</p>"
      }
    },
    "MessageStopEvent": {
      "base": "<p>The stop event for a message.</p>",
      "refs": {
        "ConverseStreamOutput$messageStop": "<p>Message stop information.</p>"
      }
    },
    "Messages": {
      "base": null,
      "refs": {
        "ConverseRequest$messages": "<p>The messages that you want to send to the model.</p>",
        "ConverseStreamRequest$messages": "<p>The messages that you want to send to the model.</p>",
        "ConverseTokensRequest$messages": "<p>An array of messages to count tokens for.</p>"
      }
    },
    "MimeType": {
      "base": null,
      "refs": {
        "InvokeModelRequest$contentType": "<p>The MIME type of the input data in the request. You must specify <code>application/json</code>.</p>",
        "InvokeModelRequest$accept": "<p>The desired MIME type of the inference body in the response. The default value is <code>application/json</code>.</p>",
        "InvokeModelResponse$contentType": "<p>The MIME type of the inference result.</p>",
        "InvokeModelWithResponseStreamRequest$contentType": "<p>The MIME type of the input data in the request. You must specify <code>application/json</code>.</p>",
        "InvokeModelWithResponseStreamRequest$accept": "<p>The desired MIME type of the inference body in the response. The default value is <code>application/json</code>.</p>",
        "InvokeModelWithResponseStreamResponse$contentType": "<p>The MIME type of the inference result.</p>"
      }
    },
    "ModelErrorException": {
      "base": "<p>The request failed due to an error while processing the model.</p>",
      "refs": {
      }
    },
    "ModelInputPayload": {
      "base": null,
      "refs": {
        "StartAsyncInvokeRequest$modelInput": "<p>Input to send to the model.</p>"
      }
    },
    "ModelNotReadyException": {
      "base": "<p>The model specified in the request is not ready to serve inference requests. The AWS SDK will automatically retry the operation up to 5 times. For information about configuring automatic retries, see <a href=\"https://docs.aws.amazon.com/sdkref/latest/guide/feature-retry-behavior.html\">Retry behavior</a> in the <i>AWS SDKs and Tools</i> reference guide.</p>",
      "refs": {
      }
    },
    "ModelOutputs": {
      "base": null,
      "refs": {
        "GuardrailTraceAssessment$modelOutput": "<p>The output from the model.</p>"
      }
    },
    "ModelStreamErrorException": {
      "base": "<p>An error occurred while streaming the response. Retry your request.</p>",
      "refs": {
        "ConverseStreamOutput$modelStreamErrorException": "<p>A streaming error occurred. Retry your request.</p>",
        "InvokeModelWithBidirectionalStreamOutput$modelStreamErrorException": "<p>The request encountered an error with the model stream.</p>",
        "ResponseStream$modelStreamErrorException": "<p>An error occurred while streaming the response. Retry your request.</p>"
      }
    },
    "ModelTimeoutException": {
      "base": "<p>The request took too long to process. Processing time exceeded the model timeout length.</p>",
      "refs": {
        "InvokeModelWithBidirectionalStreamOutput$modelTimeoutException": "<p>The connection was closed because a request was not received within the timeout period.</p>",
        "ResponseStream$modelTimeoutException": "<p>The request took too long to process. Processing time exceeded the model timeout length.</p>"
      }
    },
    "NonBlankString": {
      "base": null,
      "refs": {
        "AccessDeniedException$message": null,
        "ConflictException$message": null,
        "InternalServerException$message": null,
        "ModelErrorException$message": null,
        "ModelErrorException$resourceName": "<p>The resource name.</p>",
        "ModelNotReadyException$message": null,
        "ModelStreamErrorException$message": null,
        "ModelStreamErrorException$originalMessage": "<p>The original message.</p>",
        "ModelTimeoutException$message": null,
        "ResourceNotFoundException$message": null,
        "ServiceQuotaExceededException$message": null,
        "ServiceUnavailableException$message": null,
        "ThrottlingException$message": null,
        "ValidationException$message": null
      }
    },
    "NonEmptyString": {
      "base": null,
      "refs": {
        "InferenceConfigurationStopSequencesList$member": null,
        "SystemContentBlock$text": "<p>A system prompt for the model. </p>",
        "ToolSpecification$description": "<p>The description for the tool.</p>"
      }
    },
    "NonNegativeInteger": {
      "base": null,
      "refs": {
        "ContentBlockDeltaEvent$contentBlockIndex": "<p>The block index for a content block delta event. </p>",
        "ContentBlockStartEvent$contentBlockIndex": "<p>The index for a content block start event.</p>",
        "ContentBlockStopEvent$contentBlockIndex": "<p>The index for a content block.</p>"
      }
    },
    "PaginationToken": {
      "base": null,
      "refs": {
        "ListAsyncInvokesRequest$nextToken": "<p>Specify the pagination token from a previous request to retrieve the next page of results.</p>",
        "ListAsyncInvokesResponse$nextToken": "<p>The pagination token to include in a subsequent request to retrieve the next page of results.</p>"
      }
    },
    "PartBody": {
      "base": null,
      "refs": {
        "BidirectionalInputPayloadPart$bytes": "<p>The audio content for the bidirectional input.</p>",
        "BidirectionalOutputPayloadPart$bytes": "<p>The speech output of the bidirectional stream.</p>",
        "PayloadPart$bytes": "<p>Base64-encoded bytes of payload data.</p>"
      }
    },
    "PayloadPart": {
      "base": "<p>Payload content included in the response.</p>",
      "refs": {
        "ResponseStream$chunk": "<p>Content included in the response.</p>"
      }
    },
    "PerformanceConfigLatency": {
      "base": null,
      "refs": {
        "InvokeModelRequest$performanceConfigLatency": "<p>Model performance settings for the request.</p>",
        "InvokeModelResponse$performanceConfigLatency": "<p>Model performance settings for the request.</p>",
        "InvokeModelWithResponseStreamRequest$performanceConfigLatency": "<p>Model performance settings for the request.</p>",
        "InvokeModelWithResponseStreamResponse$performanceConfigLatency": "<p>Model performance settings for the request.</p>",
        "PerformanceConfiguration$latency": "<p>To use a latency-optimized version of the model, set to <code>optimized</code>.</p>"
      }
    },
    "PerformanceConfiguration": {
      "base": "<p>Performance settings for a model.</p>",
      "refs": {
        "ConverseRequest$performanceConfig": "<p>Model performance settings for the request.</p>",
        "ConverseResponse$performanceConfig": "<p>Model performance settings for the request.</p>",
        "ConverseStreamMetadataEvent$performanceConfig": "<p>Model performance configuration metadata for the conversation stream event.</p>",
        "ConverseStreamRequest$performanceConfig": "<p>Model performance settings for the request.</p>"
      }
    },
    "PromptRouterTrace": {
      "base": "<p>A prompt router trace.</p>",
      "refs": {
        "ConverseStreamTrace$promptRouter": "<p>The request's prompt router.</p>",
        "ConverseTrace$promptRouter": "<p>The request's prompt router.</p>"
      }
    },
    "PromptVariableMap": {
      "base": null,
      "refs": {
        "ConverseRequest$promptVariables": "<p>Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the <code>modelId</code> field.</p>",
        "ConverseStreamRequest$promptVariables": "<p>Contains a map of variables in a prompt from Prompt management to objects containing the values to fill in for them when running model invocation. This field is ignored if you don't specify a prompt resource in the <code>modelId</code> field.</p>"
      }
    },
    "PromptVariableValues": {
      "base": "<p>Contains a map of variables in a prompt from Prompt management to an object containing the values to fill in for them when running model invocation. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management-how.html\">How Prompt management works</a>.</p>",
      "refs": {
        "PromptVariableMap$value": null
      }
    },
    "ReasoningContentBlock": {
      "base": "<p>Contains content regarding the reasoning that is carried out by the model with respect to the content in the content block. Reasoning refers to a Chain of Thought (CoT) that the model generates to enhance the accuracy of its final response.</p>",
      "refs": {
        "ContentBlock$reasoningContent": "<p>Contains content regarding the reasoning that is carried out by the model. Reasoning refers to a Chain of Thought (CoT) that the model generates to enhance the accuracy of its final response.</p>"
      }
    },
    "ReasoningContentBlockDelta": {
      "base": "<p>Contains content regarding the reasoning that is carried out by the model with respect to the content in the content block. Reasoning refers to a Chain of Thought (CoT) that the model generates to enhance the accuracy of its final response.</p>",
      "refs": {
        "ContentBlockDelta$reasoningContent": "<p>Contains content regarding the reasoning that is carried out by the model. Reasoning refers to a Chain of Thought (CoT) that the model generates to enhance the accuracy of its final response.</p>"
      }
    },
    "ReasoningTextBlock": {
      "base": "<p>Contains the reasoning that the model used to return the output.</p>",
      "refs": {
        "ReasoningContentBlock$reasoningText": "<p>The reasoning that the model used to return the output.</p>"
      }
    },
    "RequestMetadata": {
      "base": null,
      "refs": {
        "ConverseRequest$requestMetadata": "<p>Key-value pairs that you can use to filter invocation logs.</p>",
        "ConverseStreamRequest$requestMetadata": "<p>Key-value pairs that you can use to filter invocation logs.</p>"
      }
    },
    "RequestMetadataKeyString": {
      "base": null,
      "refs": {
        "RequestMetadata$key": null
      }
    },
    "RequestMetadataValueString": {
      "base": null,
      "refs": {
        "RequestMetadata$value": null
      }
    },
    "ResourceNotFoundException": {
      "base": "<p>The specified resource ARN was not found. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-resource-not-found\">ResourceNotFound</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
      }
    },
    "ResponseStream": {
      "base": "<p>Definition of content in the response stream.</p>",
      "refs": {
        "InvokeModelWithResponseStreamResponse$body": "<p>Inference response from the model in the format specified by the <code>contentType</code> header. To see the format and content of this field for different models, refer to <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html\">Inference parameters</a>.</p>"
      }
    },
    "S3Location": {
      "base": "<p>A storage location in an Amazon S3 bucket.</p>",
      "refs": {
        "DocumentSource$s3Location": "<p>The location of a document object in an Amazon S3 bucket. To see which models support S3 uploads, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html\">Supported models and features for Converse</a>.</p>",
        "ImageSource$s3Location": "<p>The location of an image object in an Amazon S3 bucket. To see which models support S3 uploads, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html\">Supported models and features for Converse</a>.</p>",
        "VideoSource$s3Location": "<p>The location of a video object in an Amazon S3 bucket. To see which models support S3 uploads, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference-supported-models-features.html\">Supported models and features for Converse</a>.</p>"
      }
    },
    "S3Uri": {
      "base": null,
      "refs": {
        "AsyncInvokeS3OutputDataConfig$s3Uri": "<p>An object URI starting with <code>s3://</code>.</p>",
        "S3Location$uri": "<p>An object URI starting with <code>s3://</code>.</p>"
      }
    },
    "ServiceQuotaExceededException": {
      "base": "<p>Your request exceeds the service quota for your account. You can view your quotas at <a href=\"https://docs.aws.amazon.com/servicequotas/latest/userguide/gs-request-quota.html\">Viewing service quotas</a>. You can resubmit your request later.</p>",
      "refs": {
      }
    },
    "ServiceUnavailableException": {
      "base": "<p>The service isn't currently available. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-service-unavailable\">ServiceUnavailable</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
        "ConverseStreamOutput$serviceUnavailableException": "<p>The service isn't currently available. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-service-unavailable\">ServiceUnavailable</a> in the Amazon Bedrock User Guide.</p>",
        "InvokeModelWithBidirectionalStreamOutput$serviceUnavailableException": "<p>The request has failed due to a temporary failure of the server.</p>",
        "ResponseStream$serviceUnavailableException": "<p>The service isn't available. Try again later.</p>"
      }
    },
    "SortAsyncInvocationBy": {
      "base": null,
      "refs": {
        "ListAsyncInvokesRequest$sortBy": "<p>How to sort the response.</p>"
      }
    },
    "SortOrder": {
      "base": null,
      "refs": {
        "ListAsyncInvokesRequest$sortOrder": "<p>The sorting order for the response.</p>"
      }
    },
    "SpecificToolChoice": {
      "base": "<p>The model must request a specific tool. For example, <code>{\"tool\" : {\"name\" : \"Your tool name\"}}</code>.</p> <note> <p>This field is only supported by Anthropic Claude 3 models.</p> </note>",
      "refs": {
        "ToolChoice$tool": "<p>The model must request the specified tool. Only supported by Anthropic Claude 3 models.</p>"
      }
    },
    "StartAsyncInvokeRequest": {
      "base": null,
      "refs": {
      }
    },
    "StartAsyncInvokeResponse": {
      "base": null,
      "refs": {
      }
    },
    "StatusCode": {
      "base": null,
      "refs": {
        "ModelErrorException$originalStatusCode": "<p>The original status code.</p>",
        "ModelStreamErrorException$originalStatusCode": "<p>The original status code.</p>"
      }
    },
    "StopReason": {
      "base": null,
      "refs": {
        "ConverseResponse$stopReason": "<p>The reason why the model stopped generating output.</p>",
        "MessageStopEvent$stopReason": "<p>The reason why the model stopped generating output.</p>"
      }
    },
    "String": {
      "base": null,
      "refs": {
        "ApplyGuardrailResponse$actionReason": "<p>The reason for the action taken when harmful content is detected.</p>",
        "Citation$title": "<p>The title or identifier of the source document being cited.</p>",
        "CitationGeneratedContent$text": "<p>The text content that was generated by the model and is supported by the associated citation.</p>",
        "CitationSourceContent$text": "<p>The text content from the source document that is being cited.</p>",
        "CitationSourceContentDelta$text": "<p>An incremental update to the text content from the source document that is being cited.</p>",
        "CitationsDelta$title": "<p>The title or identifier of the source document being cited.</p>",
        "ContentBlock$text": "<p>Text to include in the message.</p>",
        "ContentBlockDelta$text": "<p>The content text.</p>",
        "DocumentBlock$context": "<p>Contextual information about how the document should be processed or interpreted by the model when generating citations.</p>",
        "DocumentContentBlock$text": "<p>The text content of the document.</p>",
        "DocumentSource$text": "<p>The text content of the document source.</p>",
        "GuardrailAssessmentListMap$key": null,
        "GuardrailAssessmentMap$key": null,
        "GuardrailConverseTextBlock$text": "<p>The text that you want to guard.</p>",
        "GuardrailCustomWord$match": "<p>The match for the custom word.</p>",
        "GuardrailManagedWord$match": "<p>The match for the managed word.</p>",
        "GuardrailPiiEntityFilter$match": "<p>The PII entity filter match.</p>",
        "GuardrailRegexFilter$name": "<p>The regex filter name.</p>",
        "GuardrailRegexFilter$match": "<p>The regex filter match.</p>",
        "GuardrailRegexFilter$regex": "<p>The regex query.</p>",
        "GuardrailTextBlock$text": "<p>The input text details to be evaluated by the guardrail.</p>",
        "GuardrailTopic$name": "<p>The name for the guardrail.</p>",
        "GuardrailTraceAssessment$actionReason": "<p>Provides the reason for the action taken when harmful content is detected.</p>",
        "PromptVariableMap$key": null,
        "PromptVariableValues$text": "<p>The text value that the variable maps to.</p>",
        "ReasoningContentBlockDelta$text": "<p>The reasoning that the model used to return the output.</p>",
        "ReasoningContentBlockDelta$signature": "<p>A token that verifies that the reasoning text was generated by the model. If you pass a reasoning block back to the API in a multi-turn conversation, include the text and its signature unmodified.</p>",
        "ReasoningTextBlock$text": "<p>The reasoning that the model used to return the output.</p>",
        "ReasoningTextBlock$signature": "<p>A token that verifies that the reasoning text was generated by the model. If you pass a reasoning block back to the API in a multi-turn conversation, include the text and its signature unmodified.</p>",
        "ToolResultContentBlock$text": "<p>A tool result that is text.</p>",
        "ToolUseBlockDelta$input": "<p>The input for a requested tool.</p>"
      }
    },
    "SystemContentBlock": {
      "base": "<p>A system content block.</p>",
      "refs": {
        "SystemContentBlocks$member": null
      }
    },
    "SystemContentBlocks": {
      "base": null,
      "refs": {
        "ConverseRequest$system": "<p>A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.</p>",
        "ConverseStreamRequest$system": "<p>A prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation.</p>",
        "ConverseTokensRequest$system": "<p>The system content blocks to count tokens for. System content provides instructions or context to the model about how it should behave or respond. The token count will include any system content provided.</p>"
      }
    },
    "Tag": {
      "base": "<p>A tag.</p>",
      "refs": {
        "TagList$member": null
      }
    },
    "TagKey": {
      "base": null,
      "refs": {
        "Tag$key": "<p>The tag's key.</p>"
      }
    },
    "TagList": {
      "base": null,
      "refs": {
        "StartAsyncInvokeRequest$tags": "<p>Tags to apply to the invocation.</p>"
      }
    },
    "TagValue": {
      "base": null,
      "refs": {
        "Tag$value": "<p>The tag's value.</p>"
      }
    },
    "TextCharactersGuarded": {
      "base": null,
      "refs": {
        "GuardrailTextCharactersCoverage$guarded": "<p>The number of text characters that were guarded by the guardrail.</p>"
      }
    },
    "TextCharactersTotal": {
      "base": null,
      "refs": {
        "GuardrailTextCharactersCoverage$total": "<p>The total number of text characters covered by the guardrail.</p>"
      }
    },
    "ThrottlingException": {
      "base": "<p>Your request was denied due to exceeding the account quotas for <i>Amazon Bedrock</i>. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-throttling-exception\">ThrottlingException</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
        "ConverseStreamOutput$throttlingException": "<p>Your request was denied due to exceeding the account quotas for <i>Amazon Bedrock</i>. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-throttling-exception\">ThrottlingException</a> in the Amazon Bedrock User Guide.</p>",
        "InvokeModelWithBidirectionalStreamOutput$throttlingException": "<p>The request was denied due to request throttling.</p>",
        "ResponseStream$throttlingException": "<p>Your request was throttled because of service-wide limitations. Resubmit your request later or in a different region. You can also purchase <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html\">Provisioned Throughput</a> to increase the rate or number of tokens you can process.</p>"
      }
    },
    "Timestamp": {
      "base": null,
      "refs": {
        "AsyncInvokeSummary$submitTime": "<p>When the invocation was submitted.</p>",
        "AsyncInvokeSummary$lastModifiedTime": "<p>When the invocation was last modified.</p>",
        "AsyncInvokeSummary$endTime": "<p>When the invocation ended.</p>",
        "GetAsyncInvokeResponse$submitTime": "<p>When the invocation request was submitted.</p>",
        "GetAsyncInvokeResponse$lastModifiedTime": "<p>The invocation's last modified time.</p>",
        "GetAsyncInvokeResponse$endTime": "<p>When the invocation ended.</p>",
        "ListAsyncInvokesRequest$submitTimeAfter": "<p>Include invocations submitted after this time.</p>",
        "ListAsyncInvokesRequest$submitTimeBefore": "<p>Include invocations submitted before this time.</p>"
      }
    },
    "TokenUsage": {
      "base": "<p>The tokens used in a message API inference call. </p>",
      "refs": {
        "ConverseResponse$usage": "<p>The total number of tokens used in the call to <code>Converse</code>. The total includes the tokens input to the model and the tokens generated by the model.</p>",
        "ConverseStreamMetadataEvent$usage": "<p>Usage information for the conversation stream event.</p>"
      }
    },
    "TokenUsageCacheReadInputTokensInteger": {
      "base": null,
      "refs": {
        "TokenUsage$cacheReadInputTokens": "<p>The number of input tokens read from the cache for the request.</p>"
      }
    },
    "TokenUsageCacheWriteInputTokensInteger": {
      "base": null,
      "refs": {
        "TokenUsage$cacheWriteInputTokens": "<p>The number of input tokens written to the cache for the request.</p>"
      }
    },
    "TokenUsageInputTokensInteger": {
      "base": null,
      "refs": {
        "TokenUsage$inputTokens": "<p>The number of tokens sent in the request to the model.</p>"
      }
    },
    "TokenUsageOutputTokensInteger": {
      "base": null,
      "refs": {
        "TokenUsage$outputTokens": "<p>The number of tokens that the model generated for the request.</p>"
      }
    },
    "TokenUsageTotalTokensInteger": {
      "base": null,
      "refs": {
        "TokenUsage$totalTokens": "<p>The total of input tokens and tokens generated by the model.</p>"
      }
    },
    "Tool": {
      "base": "<p>Information about a tool that you can use with the Converse API. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/tool-use.html\">Tool use (function calling)</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
        "ToolConfigurationToolsList$member": null
      }
    },
    "ToolChoice": {
      "base": "<p>Determines which tools the model should request in a call to <code>Converse</code> or <code>ConverseStream</code>. <code>ToolChoice</code> is only supported by Anthropic Claude 3 models and by Mistral AI Mistral Large.</p>",
      "refs": {
        "ToolConfiguration$toolChoice": "<p>If supported by the model, forces the model to request a tool.</p>"
      }
    },
    "ToolConfiguration": {
      "base": "<p>Configuration information for the tools that you pass to a model. For more information, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/tool-use.html\">Tool use (function calling)</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
        "ConverseRequest$toolConfig": "<p>Configuration information for the tools that the model can use when generating a response. </p> <p>For information about models that support tool use, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html#conversation-inference-supported-models-features\">Supported models and model features</a>.</p>",
        "ConverseStreamRequest$toolConfig": "<p>Configuration information for the tools that the model can use when generating a response.</p> <p>For information about models that support streaming tool use, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html#conversation-inference-supported-models-features\">Supported models and model features</a>.</p>"
      }
    },
    "ToolConfigurationToolsList": {
      "base": null,
      "refs": {
        "ToolConfiguration$tools": "<p>An array of tools that you want to pass to a model.</p>"
      }
    },
    "ToolInputSchema": {
      "base": "<p>The schema for the tool. The top level schema type must be <code>object</code>. </p>",
      "refs": {
        "ToolSpecification$inputSchema": "<p>The input schema for the tool in JSON format.</p>"
      }
    },
    "ToolName": {
      "base": null,
      "refs": {
        "SpecificToolChoice$name": "<p>The name of the tool that the model must request. </p>",
        "ToolSpecification$name": "<p>The name for the tool.</p>",
        "ToolUseBlock$name": "<p>The name of the tool that the model wants to use.</p>",
        "ToolUseBlockStart$name": "<p>The name of the tool that the model is requesting to use.</p>"
      }
    },
    "ToolResultBlock": {
      "base": "<p>A tool result block that contains the results for a tool request that the model previously made.</p>",
      "refs": {
        "ContentBlock$toolResult": "<p>The result for a tool request that a model makes.</p>"
      }
    },
    "ToolResultContentBlock": {
      "base": "<p>The tool result content block.</p>",
      "refs": {
        "ToolResultContentBlocks$member": null
      }
    },
    "ToolResultContentBlocks": {
      "base": null,
      "refs": {
        "ToolResultBlock$content": "<p>The content of the tool result content block.</p>"
      }
    },
    "ToolResultStatus": {
      "base": null,
      "refs": {
        "ToolResultBlock$status": "<p>The status for the tool result content block.</p> <note> <p>This field is only supported by Anthropic Claude 3 models.</p> </note>"
      }
    },
    "ToolSpecification": {
      "base": "<p>The specification for the tool.</p>",
      "refs": {
        "Tool$toolSpec": "<p>The specification for the tool.</p>"
      }
    },
    "ToolUseBlock": {
      "base": "<p>A tool use content block. Contains information about a tool that the model is requesting be run. The model uses the result from the tool to generate a response.</p>",
      "refs": {
        "ContentBlock$toolUse": "<p>Information about a tool use request from a model.</p>"
      }
    },
    "ToolUseBlockDelta": {
      "base": "<p>The delta for a tool use block.</p>",
      "refs": {
        "ContentBlockDelta$toolUse": "<p>Information about a tool that the model is requesting to use.</p>"
      }
    },
    "ToolUseBlockStart": {
      "base": "<p>The start of a tool use block.</p>",
      "refs": {
        "ContentBlockStart$toolUse": "<p>Information about a tool that the model is requesting to use.</p>"
      }
    },
    "ToolUseId": {
      "base": null,
      "refs": {
        "ToolResultBlock$toolUseId": "<p>The ID of the tool request that this is the result for.</p>",
        "ToolUseBlock$toolUseId": "<p>The ID for the tool request.</p>",
        "ToolUseBlockStart$toolUseId": "<p>The ID for the tool request.</p>"
      }
    },
    "Trace": {
      "base": null,
      "refs": {
        "InvokeModelRequest$trace": "<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>",
        "InvokeModelWithResponseStreamRequest$trace": "<p>Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.</p>"
      }
    },
    "ValidationException": {
      "base": "<p>The input fails to satisfy the constraints specified by <i>Amazon Bedrock</i>. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-validation-error\">ValidationError</a> in the Amazon Bedrock User Guide.</p>",
      "refs": {
        "ConverseStreamOutput$validationException": "<p>The input fails to satisfy the constraints specified by <i>Amazon Bedrock</i>. For troubleshooting this error, see <a href=\"https://docs.aws.amazon.com/bedrock/latest/userguide/troubleshooting-api-error-codes.html#ts-validation-error\">ValidationError</a> in the Amazon Bedrock User Guide.</p>",
        "InvokeModelWithBidirectionalStreamOutput$validationException": "<p>The input fails to satisfy the constraints specified by an Amazon Web Services service.</p>",
        "ResponseStream$validationException": "<p>Input validation failed. Check your request parameters and retry the request.</p>"
      }
    },
    "VideoBlock": {
      "base": "<p>A video block.</p>",
      "refs": {
        "ContentBlock$video": "<p>Video to include in the message. </p>",
        "ToolResultContentBlock$video": "<p>A tool result that is video.</p>"
      }
    },
    "VideoFormat": {
      "base": null,
      "refs": {
        "VideoBlock$format": "<p>The block's format.</p>"
      }
    },
    "VideoSource": {
      "base": "<p>A video source. You can upload a smaller video as a base64-encoded string as long as the encoded file is less than 25 MB. You can also transfer videos up to 1 GB in size from an S3 bucket.</p>",
      "refs": {
        "VideoBlock$source": "<p>The block's source.</p>"
      }
    },
    "VideoSourceBytesBlob": {
      "base": null,
      "refs": {
        "VideoSource$bytes": "<p>Video content encoded in base64.</p>"
      }
    }
  }
}
