Billing Logic¶
This article describes how bills are generated and how prices are calculated for each billing item in the pay-as-you-go billing framework of Guance products.
Concepts¶
Term | Description |
---|---|
Data Storage | Custom setting of data retention periods for different data types. |
Basic Billing | The unit price of a billing item is a fixed value. |
Tiered Billing | The unit price of a billing item is a dynamic value, which varies depending on the selected data storage strategy for the current data type. |
Billing Cycle¶
The billing cycle of Guance is daily: usage incurred by a workspace during a day is settled at midnight the next day, generating a daily bill that is synchronized to the Guance Billing Center. The consumption amount is then deducted from the corresponding account through the actual bound settlement method.
Billing Items¶
Time Series¶
The time-series engine of Guance mainly involves the following basic concepts:
Term | Description |
---|---|
Measurement | Generally represents a collection of statistical values, conceptually similar to a table in a relational database. |
Data Point | In the context of metric data reporting, it refers to a single sample of metric data, analogous to row data in relational databases. |
Time | Timestamp indicating when the data point was generated; it can also be understood as the time when DataKit collects the line protocol report for that metric data point. |
Metrics | Fields that generally store numerical data changing over time, such as the common metrics `cpu_total`, `cpu_use`, and `cpu_use_pencent` in the CPU measurement. |
Tags | Generally store attribute information that does not change over time. For example, fields like `host` and `project` in the CPU measurement are tag attributes that identify the actual object the metrics describe. |
Example¶
Using the above figure as an example, the CPU measurement contains a total of 6 data points for a single metric. Each data point has one time field (`time`), one metric (`cpu_use_pencent`), and two tags (`host` and `project`). The first and fourth rows both record the CPU usage rate (`cpu_use_pencent`) of the `host` named `Hangzhou_test1`, whose `project` is `Guance`. Similarly, the second and fifth rows record the `host` named `Ningxia_test1` with `project` `Guance`, and the third and sixth rows record the `host` named `Singapore_test1` with `project` `Guance_oversea`.
Based on the time series statistics above, the `cpu_use_pencent` metric yields 3 time series combinations:

- `"host":"Hangzhou_test1","project":"Guance"`
- `"host":"Ningxia_test1","project":"Guance"`
- `"host":"Singapore_test1","project":"Guance_oversea"`
Similarly, the total number of time series in the current workspace is obtained by summing the time series counted for every metric.

The billed data is the data collected by DataKit and reported to the workspace, i.e., the data obtained through DQL with namespace `M` (metrics).
Billing Item Statistics
Hourly interval statistics for the number of new time series added within the day.
Cost Calculation Formula: Daily cost = Actual billed quantity/1000 * Unit Price (Apply corresponding unit price according to the data storage strategy mentioned above.)
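As a worked illustration of this formula, here is a minimal Python sketch. The tier prices are hypothetical placeholders rather than Guance's actual price list; only the 3-day value is taken from the billing example at the end of this article.

```python
# Tiered unit prices per 1,000 time series, keyed by data storage strategy.
# These values are hypothetical placeholders, not Guance's price list
# (the 3-day tier matches the billing example at the end of this article).
TIERED_UNIT_PRICE = {
    "3 days": 0.06,
    "7 days": 0.08,
    "14 days": 0.10,
}

def daily_timeseries_cost(billed_quantity: int, storage_strategy: str) -> float:
    """Daily cost = actual billed quantity / 1,000 * unit price."""
    return billed_quantity / 1000 * TIERED_UNIT_PRICE[storage_strategy]

print(daily_timeseries_cost(6000, "3 days"))  # 0.36
```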
Logs¶
Any of the following scenarios will generate corresponding log data:
- Enabling log data collection and reporting;
- Configuring monitoring tasks, intelligent inspections, SLOs, or reporting custom events via OpenAPI;
- Enabling availability test tasks and reporting test data triggered by self-built test nodes.
Billing Item Statistics
Hourly interval statistics for the number of new log data items added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/1000000 * Unit Price (Apply corresponding unit price according to the data storage strategy mentioned above.)
Warning
Depending on the selected storage type, very large log data will be split into multiple entries for billing:
ES Storage: If log size exceeds 10 KB, the number of billed entries for this log = Round down (log size/10 KB)
SLS Storage: If log size exceeds 2 KB, the number of billed entries for this log = Round down (log size/2 KB)
If a single entry is smaller than the limit, it is still counted as 1 entry.
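A minimal sketch of the splitting rule and the cost formula, assuming log sizes measured in KB; the $1.2/million unit price is taken from the billing example at the end of this article:

```python
import math

# Split thresholds per storage type, in KB (from the warning above).
SPLIT_THRESHOLD_KB = {"ES": 10, "SLS": 2}

def billed_log_entries(log_size_kb: float, storage: str) -> int:
    """Entries billed for one log: 1 if it does not exceed the threshold,
    otherwise round down (log size / threshold)."""
    threshold = SPLIT_THRESHOLD_KB[storage]
    if log_size_kb <= threshold:
        return 1
    return math.floor(log_size_kb / threshold)

def daily_log_cost(billed_entries: int, unit_price_per_million: float) -> float:
    """Daily cost = actual billed quantity / 1,000,000 * unit price."""
    return billed_entries / 1_000_000 * unit_price_per_million

print(billed_log_entries(25, "ES"))    # 2
print(daily_log_cost(2_000_000, 1.2))  # 2.4 (matches the billing example below)
```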
Data Forwarding¶
Log data can be forwarded to Guance or to four other external storage options. Traffic is aggregated per data forwarding rule and billed by size.
Note: Data forwarded and saved to Guance retains records.
Billing Item Statistics
Hourly interval statistics for the capacity size of data forwarding within the data storage policy. Default capacity unit: Bytes.
Cost Calculation Formula: Daily cost = Actual billed capacity/1000000000 * Corresponding unit price
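For illustration, a one-function sketch of this formula; the $0.2/GB unit price is a hypothetical placeholder:

```python
def daily_forwarding_cost(billed_bytes: int, unit_price_per_gb: float) -> float:
    """Daily cost = actual billed capacity / 1,000,000,000 * unit price."""
    return billed_bytes / 1_000_000_000 * unit_price_per_gb

# E.g., 50 GB forwarded at a hypothetical $0.2/GB:
print(daily_forwarding_cost(50_000_000_000, 0.2))  # 10.0
```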
Network¶
- Enable eBPF network data collection
Billing Item Statistics
Hourly interval statistics for the number of new HOSTs added within the day.
Cost Calculation Formula: Daily cost = Actual billed quantity * Corresponding unit price
Application Performance Trace¶
- Daily Span data volume statistics within the workspace.
Note: In Guance's new billing adjustment, the larger of "Span quantity/10" and the `trace_id` quantity is taken as the day's billing quantity.
Billing Item Statistics
Hourly interval statistics for the number of new `trace_id`s added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/1000000 * Corresponding unit price
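A minimal sketch combining the max rule from the note above with the cost formula; the quantities and the $2/million unit price match the billing example at the end of this article:

```python
def daily_trace_billed_quantity(span_count: int, trace_id_count: int) -> float:
    """Billed quantity for the day: the larger of span_count/10
    and the trace_id quantity."""
    return max(span_count / 10, trace_id_count)

def daily_trace_cost(span_count: int, trace_id_count: int,
                     unit_price_per_million: float) -> float:
    """Daily cost = actual billed quantity / 1,000,000 * unit price."""
    quantity = daily_trace_billed_quantity(span_count, trace_id_count)
    return quantity / 1_000_000 * unit_price_per_million

# E.g., 20M spans but only 1.5M traces: 20M/10 = 2M > 1.5M, so 2M is billed.
print(daily_trace_cost(20_000_000, 1_500_000, 2.0))  # 4.0
```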
Application Performance Profile¶
- Enable application performance monitoring Profile data collection
Billing Item Statistics
Hourly interval statistics for the number of new Profile data items added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/10000 * Corresponding unit price
Warning
Profile data mainly consists of two parts: Basic Attribute Data + Profile Analysis Files:
If there are excessively large Profile analysis files, then Profile data will be split into multiple entries for billing.
Profile analysis file data greater than 300 KB, billing quantity = Round down (Profile analysis file size/300 KB)
If the analysis file is smaller than the limit, it is still counted as 1 entry.
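A minimal sketch of the Profile splitting rule, assuming analysis file sizes measured in KB:

```python
import math

PROFILE_SPLIT_KB = 300  # split threshold for Profile analysis files

def billed_profile_entries(analysis_file_kb: float) -> int:
    """1 entry if the analysis file does not exceed 300 KB,
    otherwise round down (file size / 300 KB)."""
    if analysis_file_kb <= PROFILE_SPLIT_KB:
        return 1
    return math.floor(analysis_file_kb / PROFILE_SPLIT_KB)

print(billed_profile_entries(200))   # 1
print(billed_profile_entries(1000))  # 3
```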
User Access PV¶
- Daily statistics on the quantity of Resource, Long Task, Error, Action data generated within the workspace.
Note: In Guance's new billing adjustment, the larger of "quantity/100" (the Resource, Long Task, Error, and Action data counted above) and the PV quantity is taken as the day's billing quantity.
Billing Item Statistics
Hourly interval statistics for the number of new PV data items added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/10000 * Unit Price (Apply corresponding unit price according to the data storage strategy mentioned above.)
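A minimal sketch of the max rule and the cost formula; the 20K PV quantity and $0.07/10K unit price match the billing example at the end of this article, while the 1 million RUM data count is a hypothetical input:

```python
def daily_pv_billed_quantity(rum_data_count: int, pv_count: int) -> float:
    """Billed quantity: the larger of
    (Resource + Long Task + Error + Action quantity)/100 and the PV count."""
    return max(rum_data_count / 100, pv_count)

def daily_pv_cost(rum_data_count: int, pv_count: int,
                  unit_price_per_10k: float) -> float:
    """Daily cost = actual billed quantity / 10,000 * unit price."""
    quantity = daily_pv_billed_quantity(rum_data_count, pv_count)
    return quantity / 10_000 * unit_price_per_10k

# 1,000,000/100 = 10,000 < 20,000 PVs, so the PV count is billed.
print(daily_pv_cost(1_000_000, 20_000, 0.07))  # 0.14
```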
Session Replay¶
- Enable session replay collection
Billing Item Statistics
Hourly interval statistics for the number of new Sessions added within the day.
Cost Calculation Formula: Daily cost = Actual billed quantity/1000 * Corresponding unit price
Warning
If there are overly long active Sessions, they are split into multiple entries for billing based on `time_spent`:

If a Session's `time_spent` > 4 hours, billed quantity = Round down (`time_spent`/4 hours);

If a Session's `time_spent` is 4 hours or less, it is still counted as 1 Session.
Availability Monitoring¶
- Enable availability test tasks and return test results via Guance-provided test nodes.
Billing Item Statistics
Hourly interval statistics for the number of new test data items added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/10000 * Corresponding unit price
Warning
Since test data is currently stored in the default log index, DQL queries or statistics must add the following filter conditions to query test data:

`index = ['default'], source = ['http_dial_testing', 'tcp_dial_testing', 'icmp_dial_testing', 'websocket_dial_testing']`
Task Invocations¶
- Enable monitors, SLOs, and other scheduled detection tasks. Mutation detection, range detection, outlier detection, and log detection each count as 5 task invocations per detection, while other detection types count as 1 task invocation per detection. In addition, if the detection interval exceeds 15 minutes, the excess is stacked as 1 additional task invocation per 15 minutes (a sketch implementing these rules follows the examples below);
- Intelligent monitoring: host, log, and application intelligent detection count as 10 task invocations per execution; user access intelligent detection counts as 100 task invocations per execution.
Calculation Example
Monitor invocation counts:
- Normal case calculation example: Assume executing one [mutation detection], then it counts as 5 task invocations.
- Exceeding detection interval calculation example: If the detection interval is 30 minutes, then the excess part adds 1 every 15 minutes. For instance, executing one [outlier detection] counts as 6 task invocations.
- Detection type counting multiple times and exceeding detection interval calculation example: Executing two [range detections] with a stacked detection interval of 60 minutes counts as 13 task invocations (2 detections * 5 + 3 exceeding detection intervals).
Intelligent monitoring invocation counts calculation example: Assume executing one [host intelligent monitoring], then it counts as 10 task invocations.
- Each DataKit/OpenAPI query counts as 1 task invocation;
- Metric generation: each query counts as 1 task invocation;
- Advanced functions provided by the Func Center count as 1 task invocation per query.
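To make the counting rules concrete, here is a minimal Python sketch of the single-detection case; it reproduces the mutation and outlier examples above. The type mapping is distilled from the rules in this section and is an assumption, not an official API, and it does not model the stacked multi-detection case shown in the range detection example.

```python
# Minimal sketch of per-detection task invocation counting.
# The mapping below is distilled from the rules above, not an official API.
INVOCATIONS_PER_DETECTION = {
    "mutation": 5, "range": 5, "outlier": 5, "log": 5,  # 5 per detection
    # all other detection types: 1 per detection
}

def task_invocations(detection_type: str, interval_minutes: int) -> int:
    """Base invocations for the detection type, plus 1 stacked invocation
    for every full 15 minutes the detection interval exceeds 15 minutes."""
    base = INVOCATIONS_PER_DETECTION.get(detection_type, 1)
    extra = max(0, (interval_minutes - 15) // 15)
    return base + extra

print(task_invocations("mutation", 15))  # 5
print(task_invocations("outlier", 30))   # 6
```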
Billing Item Statistics
Hourly interval statistics for the number of new task invocations added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/10000 * Corresponding unit price
SMS¶
- Configure SMS notifications in alert strategies
Billing Item Statistics
Hourly interval statistics for the number of new SMS sending instances added within an hour.
Cost Calculation Formula: Daily cost = Actual billed quantity/10 * Unit Price
Billing Example¶
Assume Company A uses Guance to comprehensively observe its IT infrastructure and application systems.
Assume Company A has a total of 10 hosts (each host defaults to 600 daily active timelines), generating 6000 timelines per day, 2 million log data items, 2 million Trace data items, 20 thousand PV data items, and 20 thousand task schedules, using the following data storage strategy:
Billing Item | Metrics (Time Series) | Logs | Application Performance Trace | User Access PV |
---|---|---|---|---|
Data Storage Strategy | 3 Days | 7 Days | 3 Days | 3 Days |
Specific details are as follows:
Billing Item | Daily Billing Quantity | Billing Unit Price | Billing Logic | Daily Billing Cost |
---|---|---|---|---|
Time Series | 6,000 entries | $0.06/1,000 entries | (Actual billed quantity/1,000) * unit price, i.e., (6,000/1,000) * $0.06 | $0.36 |
Logs | 2 million entries | $1.2/million entries | (Actual billed quantity/billing unit) * unit price, i.e., (2 million/1 million) * $1.2 | $2.4 |
Trace | 2 million entries | $2/million entries | (Actual billed quantity/billing unit) * unit price, i.e., (2 million/1 million) * $2 | $4 |
PV | 20 thousand entries | $0.07/10K entries | (Actual billed quantity/billing unit) * unit price, i.e., (20K/10K) * $0.07 | $0.14 |
Task Scheduling | 20 thousand times | $0.1/10K times | (Actual billed quantity/billing unit) * unit price, i.e., (20K/10K) * $0.1 | $0.2 |
Note: Since time series are incrementally billed, changes in the number of time series generated by the company will affect costs.
For more time series quantity estimates, refer to Time Series Example.
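To verify the arithmetic above, here is a short Python sketch reproducing Company A's daily bill from the table:

```python
# Reproduces Company A's daily bill from the table above.
# Each entry: (daily quantity, billing unit divisor, unit price in USD).
ITEMS = {
    "Time Series":     (6_000,     1_000,     0.06),
    "Logs":            (2_000_000, 1_000_000, 1.2),
    "Trace":           (2_000_000, 1_000_000, 2.0),
    "PV":              (20_000,    10_000,    0.07),
    "Task Scheduling": (20_000,    10_000,    0.1),
}

daily_bill = {name: qty / unit * price
              for name, (qty, unit, price) in ITEMS.items()}
for name, cost in daily_bill.items():
    print(f"{name}: ${cost:.2f}")
print(f"Total: ${sum(daily_bill.values()):.2f}")  # Total: $7.10
```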