AWS DynamoDB¶
The displayed metrics of AWS DynamoDB include throughput capacity units, latency, concurrent connections, and read/write throughput, which reflect the performance and scalability of DynamoDB in handling large-scale data storage and access.
Configuration¶
Install Func¶
It is recommended to enable Guance Integration - Extensions - DataFlux Func (Automata): all prerequisites are installed automatically, and you can proceed directly to the script installation.
For a self-deployed Func, refer to Self-deploying Func.
Install Script¶
Note: Please prepare an Amazon AK that meets the requirements in advance (for simplicity, you can directly grant the global read-only permission `ReadOnlyAccess`).
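If you prefer not to grant the broad `ReadOnlyAccess` managed policy, a narrower custom policy along the following lines should cover the DynamoDB and CloudWatch read calls this kind of collector needs. This is an illustrative sketch, not the script's official requirement; the exact action list may need adjusting to the script version you install:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:ListTables",
        "dynamodb:DescribeTable",
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics"
      ],
      "Resource": "*"
    }
  ]
}
```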
Automata Script Installation¶
- Log in to the Guance console
- Click on the 【Integration】 menu and select 【Cloud Account Management】
- Click on 【Add Cloud Account】, select 【AWS】, and fill in the required information on the interface. If the cloud account information has been configured before, skip this step
- Click on 【Test】, and after a successful test, click on 【Save】. If the test fails, please check if the relevant configuration information is correct and retest
- Click on the 【Cloud Account Management】 list to see the added cloud account, click on the corresponding cloud account to enter the details page
- Click on the 【Integration】 button on the cloud account details page, find AWS DynamoDB under the Not Installed list, click on the 【Install】 button, and install it in the pop-up installation interface.
Manual Script Installation¶
- Log in to the Func console, click on 【Script Market】, enter the Guance script market, and search for: integration_aws_dynamodb
- Click on 【Install】, then enter the corresponding parameters: AWS AK ID, AK Secret, and account name.
- Click on 【Deploy Startup Script】; the system will automatically create the Startup script set and configure the corresponding startup scripts.
- After enabling, you can see the corresponding automatic trigger configuration in 「Management / Automatic Trigger Configuration」. Click on 【Execute】 to run it immediately without waiting for the scheduled time. After a short while, you can check the execution task records and the corresponding logs.
Verification¶
- Confirm in 「Management / Automatic Trigger Configuration」 that the corresponding task has an automatic trigger configuration; you can also check the task records and logs for any exceptions
- In Guance, check if there is asset information in 「Infrastructure / Custom」
- In Guance, check if there is corresponding monitoring data in 「Metrics」
Metrics¶
After configuring Amazon CloudWatch, the default metric sets are as follows. You can collect more metrics through configuration; see Amazon CloudWatch Metric Details.
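Under the hood, statistics like these are retrieved from the CloudWatch `GetMetricStatistics` API in the `AWS/DynamoDB` namespace, keyed by the `TableName` dimension. A minimal sketch of an equivalent query with boto3 (the function names are illustrative, not part of the script; the `fetch_table_metric` call requires boto3 and configured AWS credentials):

```python
from datetime import datetime, timedelta, timezone


def build_metric_query(table_name, metric_name, period=300, minutes=60):
    """Build GetMetricStatistics parameters for a DynamoDB table metric
    over the last `minutes` minutes, aggregated in `period`-second windows."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": period,
        "Statistics": ["Average", "Maximum", "Minimum", "SampleCount", "Sum"],
    }


def fetch_table_metric(table_name, metric_name, region="cn-north-1"):
    """Query CloudWatch for one table metric; needs boto3 and AWS credentials."""
    import boto3  # imported lazily so the module loads without boto3 installed

    client = boto3.client("cloudwatch", region_name=region)
    return client.get_metric_statistics(
        **build_metric_query(table_name, metric_name)
    )
```

Each returned datapoint carries the five statistics that appear as `_Average`, `_Maximum`, `_Minimum`, `_SampleCount`, and `_Sum` suffixes in the metric tables below.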
ConditionalCheckFailedRequests¶
The number of failed attempts to perform conditional writes.
| Metric Name | Description | Unit | Dimensions |
| --- | --- | --- | --- |
| ConditionalCheckFailedRequests_Average | Average number of failed conditional requests | Count | TableName |
| ConditionalCheckFailedRequests_Maximum | Maximum number of failed conditional requests | Count | TableName |
| ConditionalCheckFailedRequests_Minimum | Minimum number of failed conditional requests | Count | TableName |
| ConditionalCheckFailedRequests_SampleCount | Number of samples of failed conditional requests | Count | TableName |
| ConditionalCheckFailedRequests_Sum | Total number of failed conditional requests | Count | TableName |
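For context, this metric increments whenever a conditional write's condition evaluates to false. A minimal sketch of such a request with boto3 (the table, key, and function names are illustrative; `try_acquire_lock` requires boto3 and AWS credentials):

```python
def build_conditional_put(table_name: str, lock_id: str) -> dict:
    """Build a PutItem request that only succeeds if the item does not
    already exist; when it does exist, DynamoDB rejects the write with
    ConditionalCheckFailedException and this metric increments."""
    return {
        "TableName": table_name,
        "Item": {"LockID": {"S": lock_id}},
        "ConditionExpression": "attribute_not_exists(LockID)",
    }


def try_acquire_lock(table_name: str, lock_id: str) -> bool:
    import boto3
    from botocore.exceptions import ClientError

    client = boto3.client("dynamodb")
    try:
        client.put_item(**build_conditional_put(table_name, lock_id))
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # counted in ConditionalCheckFailedRequests
        raise
```

A sustained rise in this metric usually means application-level contention (e.g. many writers racing for the same lock item), not a service problem.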
ConsumedReadCapacityUnits¶
The number of read capacity units consumed in a specified time period, which helps track the usage of provisioned throughput.
| Metric Name | Description | Unit | Dimensions |
| --- | --- | --- | --- |
| ConsumedReadCapacityUnits_Average | Average read capacity consumed per request | Count | TableName |
| ConsumedReadCapacityUnits_Maximum | Maximum read capacity units consumed by any request to the table or index | Count | TableName |
| ConsumedReadCapacityUnits_Minimum | Minimum read capacity units consumed by any request to the table or index | Count | TableName |
| ConsumedReadCapacityUnits_SampleCount | Number of read requests to DynamoDB, even if no read capacity was consumed | Count | TableName |
| ConsumedReadCapacityUnits_Sum | Total read capacity units consumed | Count | TableName |
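Note that `_Average` is the mean consumption per request, while the per-second consumed rate (the figure to compare against a table's provisioned capacity) is derived from `_Sum` divided by the CloudWatch period. A minimal sketch of that conversion (the function name is illustrative):

```python
def consumed_units_per_second(sum_value: float, period_seconds: int) -> float:
    """Convert a ConsumedReadCapacityUnits (or WriteCapacityUnits) Sum
    datapoint into an average per-second rate over the CloudWatch period,
    e.g. for comparison against provisioned capacity."""
    if period_seconds <= 0:
        raise ValueError("period must be positive")
    return sum_value / period_seconds


# e.g. 15000 units consumed over a 300-second period -> 50.0 units/second
rate = consumed_units_per_second(15000, 300)
```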
ConsumedWriteCapacityUnits¶
The number of write capacity units consumed in a specified time period, which helps track the usage of provisioned throughput.
| Metric Name | Description | Unit | Dimensions |
| --- | --- | --- | --- |
| ConsumedWriteCapacityUnits_Average | Average write capacity consumed per request | Count | TableName |
| ConsumedWriteCapacityUnits_Maximum | Maximum write capacity units consumed by any request to the table or index | Count | TableName |
| ConsumedWriteCapacityUnits_Minimum | Minimum write capacity units consumed by any request to the table or index | Count | TableName |
| ConsumedWriteCapacityUnits_SampleCount | Number of write requests to DynamoDB, even if no write capacity was consumed | Count | TableName |
| ConsumedWriteCapacityUnits_Sum | Total write capacity units consumed | Count | TableName |
Objects¶
The collected AWS DynamoDB object data structure can be seen in 「Infrastructure - Custom」
```json
{
  "measurement": "aws_dynamodb",
  "tags": {
    "RegionId": "cn-north-1",
    "TableArn": "arn:aws-cn:dynamodb:cn-north-1:",
    "TableId": "0ce8d4f9b35",
    "TableName": "eks-tflock",
    "TableStatus": "ACTIVE",
    "name": "eks-tflock"
  },
  "fields": {
    "AttributeDefinitions": "[{\"AttributeName\": \"LockID\", \"AttributeType\": \"S\"}]",
    "BillingModeSummary": "{}",
    "CreationDateTime": "2023-03-22T23:39:42.352000+08:00",
    "ItemCount": "1",
    "KeySchema": "[{\"AttributeName\": \"LockID\", \"KeyType\": \"HASH\"}]",
    "LocalSecondaryIndexes": "{}",
    "TableSizeBytes": "96",
    "message": "{instance json info}"
  }
}
```

Note: The fields in `tags` and `fields` may change with subsequent updates.
Note 1: The value of `tags.name` is the instance ID, used as a unique identifier.
Note 2: `fields.message` and `fields.Endpoint` are JSON-serialized strings.
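Because several of the object fields (e.g. `AttributeDefinitions`, `KeySchema`) are JSON-serialized strings rather than nested objects, a consumer of this data must decode them before use. A minimal sketch, using the sample values from the structure above:

```python
import json

# Field values as they appear in the collected object: strings, not objects
fields = {
    "AttributeDefinitions": "[{\"AttributeName\": \"LockID\", \"AttributeType\": \"S\"}]",
    "KeySchema": "[{\"AttributeName\": \"LockID\", \"KeyType\": \"HASH\"}]",
    "ItemCount": "1",
}

# Decode the serialized JSON strings into Python structures
attrs = json.loads(fields["AttributeDefinitions"])
key_schema = json.loads(fields["KeySchema"])
item_count = int(fields["ItemCount"])

print(attrs[0]["AttributeName"])   # -> LockID
print(key_schema[0]["KeyType"])    # -> HASH
```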