Cloud Integration

This document introduces how to use the "Cloud Sync" series of script packages from the Script Market to synchronize and process data from cloud platforms such as Alibaba Cloud and AWS.

Tip

Always use the latest version of DataFlux Func for operations.

Tip

New features are continuously added to this script package. Please keep an eye on this documentation page.

1. Prerequisites

  1. Register a Guance account and log in.

1.1 If using DataFlux Func (Automata)

All prerequisites are installed automatically, so no extra preliminary work is required. Please proceed to script installation.

1.2 If self-deploying Func

# Download DataFlux Func GSE
/bin/bash -c "$(curl -fsSL func.guance.com/download)" -- --for=GSE

# Install DataFlux Func
sudo /bin/bash {installation_directory}/run-portable.sh

1.2.1 GSE Edition vs. Original Edition

The differences between the GSE Edition and the Original Edition are as follows:

| Comparison Item | GSE Edition | Original Edition |
| --- | --- | --- |
| Pre-installed Scripts | Guance Script Market scripts: Integration Core, Self-built Inspection Core, Algorithm Library, Toolkit. Automatically updated to the latest version upon each restart. | None |
| Pre-installed Python Packages | Besides the packages DataFlux Func itself depends on: third-party packages the official script sets depend on; math packages like numpy, pandas; other packages like jinja2, mailer, openpyxl. | Only the packages DataFlux Func itself depends on |
| Pre-added Script Market | Guance Script Market | None |
| Public Network Access | Initializing the pre-installed script sets requires DataFlux Func itself to have public network access; otherwise it might fail to start normally. | Not required |
Tip

If you have already deployed the Original Edition of Func, you can simply re-download and reinstall it following the GSE Edition steps above.

For more information, please refer to: Quick Start

  • After installation is complete, create a new connector: select Guance as the type, and configure the workspace's API Key ID and API Key in the connector.

2. Script Installation

Here, it is assumed that Alibaba Cloud monitoring data needs to be collected and written to Guance.

Tip

Please prepare an Alibaba Cloud AK with the required permissions in advance (for simplicity, you can directly grant the global read-only permission ReadOnlyAccess).

2.1 Install specific collectors

To synchronize monitoring data for cloud resources, we generally need to install two scripts: one for collecting basic information about the corresponding cloud assets, and one for collecting cloud monitoring information.

If you need to collect logs, you also need to enable the corresponding log collection script. If you need to collect bills, enable the cloud bill collection script.

Taking Alibaba Cloud ECS collection as an example, go to "Management / Script Market" and click to install the corresponding script packages in order:

  • "Integration (Alibaba Cloud - Cloud Monitor)" (ID: integration_alibabacloud_monitor)
  • "Integration (Alibaba Cloud - ECS)" (ID: integration_alibabacloud_ecs)

After clicking [Install], enter the corresponding parameters: Alibaba Cloud AK, Alibaba Cloud account name.

Click [Deploy Startup Script]. The system will automatically create the Startup script set and configure the corresponding startup scripts.

In addition, you can see the corresponding scheduled tasks (old version: automatic trigger configurations) in "Management / Scheduled Tasks (Old: Automatic Trigger Configurations)". Click [Execute] to run the task immediately without waiting for the scheduled time. After a moment, you can check the execution task records and the corresponding logs.

2.2 Verify synchronization status

  1. In "Management / Scheduled Tasks (Old: Automatic Trigger Configurations)", confirm whether the corresponding task exists in the scheduled tasks (old: automatic trigger configurations). You can also check the corresponding task records and logs to see if there are any exceptions.
  2. On the Guance platform, check in "Infrastructure / Custom" to see if asset information exists.
  3. On the Guance platform, check in "Metrics" to see if there is corresponding monitoring data.

3. Code Details

The following is a step-by-step explanation of the code in this example.

In fact, all "Integration" type scripts can be used in a similar way.

import section

To use the scripts provided by the Script Market, the relevant components need to be imported after the script package is installed.

from integration_core__runner import Runner
import integration_alibabacloud_monitor__main as aliyun_monitor

Runner is the actual launcher for all collectors and always needs to be introduced to start the collector. aliyun_monitor is the "Alibaba Cloud - Cloud Monitor" collector required in this example.

Account configuration section

To call the cloud platform's API, you also need to provide the platform's AK for the collector to use.

account = {
    'ak_id'    : '<Alibaba Cloud AK ID with appropriate permissions>',
    'ak_secret': '<Alibaba Cloud AK Secret with appropriate permissions>',

    'extra_tags': {
        'account_name': 'My Alibaba Cloud Account',
    }
}

Reference for creating Alibaba Cloud AK/SK: Create AccessKey

Besides the most basic ak_id and ak_secret, some cloud platform accounts may require additional fields. For example, AWS needs assume_role_arn, role_session_name, etc. when using an IAM role. For details, please refer to the Amazon (AWS) code example below.

Finally, each account also supports an extra_tags field, which uniformly adds the same tags to all data collected from that account, making it easy to identify in Guance which account the data belongs to.

Both the keys and values of extra_tags are strings. Their content is arbitrary, and multiple key-value pairs are supported.

In this example, by configuring { 'account_name': 'My Alibaba Cloud Account' } for extra_tags, all data from this account is tagged with account_name="My Alibaba Cloud Account".
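
For example, several tags can be attached at once (a sketch; the env and project tag names below are hypothetical):

account = {
    'ak_id'    : '<Alibaba Cloud AK ID with appropriate permissions>',
    'ak_secret': '<Alibaba Cloud AK Secret with appropriate permissions>',

    # Multiple key-value pairs are supported; keys and values are both strings.
    # The env and project tag names are hypothetical examples.
    'extra_tags': {
        'account_name': 'My Alibaba Cloud Account',
        'env'         : 'production',
        'project'     : 'demo',
    }
}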

Function definition section

In DataFlux Func, all code must be contained within a function decorated with @DFF.API(...).

@DFF.API('Execute Cloud Asset Synchronization')
def run():
    # Specific code omitted ...
    pass

The first parameter of the @DFF.API(...) decorator is the title, and the content is arbitrary.

Integration scripts are ultimately run via "Scheduled Tasks (Old: Automatic Trigger Configurations)", and only functions decorated with @DFF.API(...) can be created as scheduled tasks.

Collector configuration section

Besides configuring the corresponding cloud platform account, the collector also needs to be configured.

The configuration options for each collector can be found in that collector's documentation; this article only provides usage hints.

Basic Configuration

collector_configs = {
    'targets': [
        {
            'namespace': 'acs_ecs_dashboard', # Cloud Monitor namespace
            'metrics'  : ['*cpu*', '*mem*'],  # Metric data containing cpu, mem in Cloud Monitor
        },
    ],
}
collectors = [
    aliyun_monitor.DataCollector(account, collector_configs),
]

Alibaba Cloud Monitor requires configuration of collection targets. In this example, we specify to only collect metrics related to CPU and memory in ECS.

Advanced Configuration

# Metric filter
def filter_ecs_metric(instance, namespace='acs_ecs_dashboard'):
    '''
    Collect metric data where instance_id is within ['xxxx']
    '''
    # return True
    instance_id = instance['tags'].get('InstanceId')
    if instance_id in ['xxxx']:
        return True
    return False

def after_collect_metric(point):
    '''
    Supplement tags for the collected data
    '''
    if point['tags']['name'] == 'xxx':
        point['tags']['custom_tag'] = 'c1'
    return point

collector_configs = {
    'targets': [
        {
            'namespace': 'acs_ecs_dashboard', # Cloud Monitor namespace
            'metrics'  : ['*cpu*', '*mem*'],  # Metric data containing cpu, mem in Cloud Monitor
        },
    ],
}
collectors = [
    aliyun_monitor.DataCollector(account, collector_configs, filters=filter_ecs_metric, after_collect=after_collect_metric),
]
  • filters: Filter function that filters the collected data (not every collector supports filters; check the specific collector's documentation for "Configure Filters"). The function should return True for data that meets your conditions and should be collected, and False for data that should be discarded. Configure it flexibly according to your business needs.
  • after_collect: Post-collection hook for secondary processing of the collected data. Typical scenarios: splitting log data, adding extra fields or tags, etc. Note: the return value of this function is the data that will actually be reported. It is recommended to only modify the incoming point, or to append new points that follow the original point structure. If you return None, False, or another falsy value, none of the points collected by this collector will be reported. See the sketch after this list.
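
A minimal sketch of an after_collect hook that splits one log point into several. The message field name is hypothetical, and the assumption that the hook may return a list of points follows from the note above; check the specific collector's documentation:

import copy

def split_log_point(point):
    '''
    Split a multi-line log point into one point per line
    '''
    # The 'message' field name is a hypothetical example
    lines = point['fields'].get('message', '').splitlines()
    if len(lines) <= 1:
        return point

    points = []
    for line in lines:
        # Build each new point from the original point structure
        p = copy.deepcopy(point)
        p['fields']['message'] = line
        points.append(p)

    # The return value is what gets reported; returning None or False
    # would drop every point collected by this collector
    return points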

Finally, you need to use the account configuration from above and the collector configuration here to generate specific "collector instances".

Startup and execution section

Collectors are run via the unified Runner launcher.

The launcher is initialized with the specific "collector instances" generated above; calling its run() function starts the collection.

The launcher will traverse all incoming collectors and sequentially report the collected data to DataKit (the default DataKit connector ID is datakit).

Runner(collectors).run()

After writing the code, if you are unsure whether the configuration is correct, you can add the debug=True parameter to the launcher to run it in debug mode.

The launcher running in debug mode will perform data collection operations normally but will not write to DataKit in the end, as follows:

Runner(collectors, debug=True).run()

If the DataKit connector ID to be written to is not the default datakit, you can pass datakit_id='<DataKit ID>' to the launcher to specify the DataKit connector ID, as follows:

Runner(collectors, datakit_id='<DataKit ID>').run()
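
Putting the sections above together, a complete minimal Alibaba Cloud script looks like this (a sketch assembled from the fragments above):

from integration_core__runner import Runner
import integration_alibabacloud_monitor__main as aliyun_monitor

# Account configuration
account = {
    'ak_id'    : '<Alibaba Cloud AK ID with appropriate permissions>',
    'ak_secret': '<Alibaba Cloud AK Secret with appropriate permissions>',

    'extra_tags': {
        'account_name': 'My Alibaba Cloud Account',
    }
}

@DFF.API('Execute Cloud Asset Synchronization')
def run():
    # Collector configuration
    collector_configs = {
        'targets': [
            {
                'namespace': 'acs_ecs_dashboard', # Cloud Monitor namespace
                'metrics'  : ['*cpu*', '*mem*'],  # CPU / memory related metrics
            },
        ],
    }
    collectors = [
        aliyun_monitor.DataCollector(account, collector_configs),
    ]

    # Start execution (add debug=True to verify the configuration
    # without writing data to DataKit)
    Runner(collectors).run()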

4. Code Reference for Other Cloud Vendors

The configuration methods for other cloud vendors are similar to Alibaba Cloud.

Amazon (AWS)

Taking collecting "EC2 instance objects" and "EC2-related monitoring metrics" as an example:

from integration_core__runner import Runner
import integration_aws_ec2__main as aws_ec2
import integration_aws_cloudwatch__main as aws_cloudwatch

# Account configuration
# AWS supports collecting resources by assuming an IAM role
# If you need to use a role, please configure: assume_role_arn, role_session_name
# If multi-factor authentication (MFA) is enabled, please configure: serial_number, token_code
account = {
    'ak_id'            : '<AWS AK ID with appropriate permissions>',
    'ak_secret'        : '<AWS AK Secret with appropriate permissions>',
    'assume_role_arn'  : '<Resource Name (ARN) of the role to assume>',
    'role_session_name': '<Role session name>',
    'serial_number'    : '<MFA device identifier>',
    'token_code'       : '<One-time code provided by the MFA device, optional>',
    'extra_tags': {
        'account_name': 'My AWS Account',
    }
}

@DFF.API('Execute Cloud Asset Synchronization')
def run():
    regions = ['cn-northwest-1']

    # Collector configuration
    ec2_configs = {
        'regions': regions,
    }
    cloudwatch_configs = {
        'regions': regions,
        'targets': [
            {
                'namespace': 'AWS/EC2',
                'metrics'  : ['*cpu*'],
            },
        ],
    }
    collectors = [
        aws_ec2.DataCollector(account, ec2_configs),
        aws_cloudwatch.DataCollector(account, cloudwatch_configs),
    ]

    # Start execution
    Runner(collectors).run()

Tencent Cloud

Taking collecting "CVM instance objects" and "CVM-related monitoring metrics" as an example:

from integration_core__runner import Runner
import integration_tencentcloud_cvm__main as tencentcloud_cvm
import integration_tencentcloud_monitor__main as tencentcloud_monitor

# Account configuration
account = {
    'ak_id'    : '<Tencent Cloud Secret ID with appropriate permissions>',
    'ak_secret': '<Tencent Cloud Secret Key with appropriate permissions>',

    'extra_tags': {
        'account_name': 'My Tencent Cloud Account',
    }
}

@DFF.API('Execute Cloud Asset Synchronization')
def run():
    regions = ['ap-shanghai']

    # Collector configuration
    cvm_configs = {
        'regions': regions,
    }
    monitor_configs = {
        'regions': regions,
        'targets': [
            {
                'namespace': 'QCE/CVM',
                'metrics'  : ['*cpu*'],
            },
        ],
    }
    collectors = [
        tencentcloud_cvm.DataCollector(account, cvm_configs),
        tencentcloud_monitor.DataCollector(account, monitor_configs),
    ]

    # Start execution
    Runner(collectors).run()

Microsoft Azure

Taking collecting "VM instance objects" and "VM-related monitoring metrics" as an example:

from integration_core__runner import Runner
import integration_azure_vm__main as vm_main
import integration_azure_monitor__main as monitor_main

# Account configuration
account = {
    "client_id"     : "<Azure Client Id>",
    "client_secret" : "<Azure Client Secret>",
    "tenant_id"     : "<Azure Tenant Id>",
    "authority_area": "<Azure Area, Default global>",
    "extra_tags": {
        "account_name": "<Your Account Name>",
    }
}

subscriptions = "<Azure Subscriptions (Multiple needs to be separated by  ',')>"
subscriptions = subscriptions.split(',')

# Collector configuration
collector_configs = {
    'subscriptions': subscriptions,
}

monitor_configs = {
    'targets': [
        {
            'namespace': 'Microsoft.Compute/virtualMachines',
            'metrics'  : [
                'CPU*'
            ],
        },
    ],
}

@DFF.API('Execute Microsoft Azure VM Resource Collection')
def run():
    collectors = [
        vm_main.DataCollector(account, collector_configs),
        monitor_main.DataCollector(account, monitor_configs),
    ]

    Runner(collectors).run()

Microsoft Azure account parameter hints:

  • client_id: Application (client) ID of the app registration
  • client_secret: Client secret value, note it's the value, not the secret ID
  • tenant_id: Tenant (directory) ID
  • authority_area: Region, including global (Global, Overseas), china (China, 21Vianet), etc. Optional parameter, defaults to global

For obtaining Client Id, Client Secret, Tenant Id, please refer to the Azure documentation: Authenticate Python apps hosted on-premises to Azure resources
