Postgres assets package

The Postgres assets package crawls PostgreSQL assets and publishes them to Atlan for discovery.

Direct extraction

Will create a new connection

This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.

Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).

1.0.0 2.2.0

To crawl assets directly from PostgreSQL using basic authentication:

Direct extraction from PostgreSQL
Workflow postgres = PostgreSQLCrawler.directBasicAuth( // (1)
                "production", // (2)
                "postgres.x9f0ve2k1kvy.ap-south-1.rds.amazonaws.com", // (3)
                5432, // (4)
                "postgres", // (5)
                "nCkM685ZH9g4fVICMs6H", // (6)
                "demo_db", // (7)
                List.of(RoleCache.getIdForName("$admin")), // (8)
                null,
                null,
                true, // (9)
                true, // (10)
                10000L, // (11)
                Map.of("demo_db", List.of("demo")), // (12)
                null); // (13)
WorkflowResponse response = postgres.run(); // (14)
  1. The PostgreSQLCrawler package will create a workflow to crawl assets from PostgreSQL. The directBasicAuth() method creates a workflow for crawling assets directly from PostgreSQL.
  2. You must provide a name for the connection that the PostgreSQL assets will exist within.
  3. You must provide the hostname of your PostgreSQL instance.
  4. You must specify the port number of the PostgreSQL instance (use 5432 for the default).
  5. You must provide your PostgreSQL username.
  6. You must provide your PostgreSQL password.
  7. You must specify the name of the PostgreSQL database you want to crawl.
  8. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users)
    • a list of groups (names) that will be connection admins
    • a list of users (names) that will be connection admins
  9. You can specify whether you want to allow queries to this connection (true, as in this example) or deny all query access to the connection (false).

  10. You can specify whether you want to allow data previews on this connection (true, as in this example) or deny all sample data previews to the connection (false).
  11. You can specify a maximum number of rows that can be accessed for any asset in the connection.
  12. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a map keyed by database name with values as a list of schemas within that database to crawl. (If set to null, all databases and schemas will be crawled.)
  13. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a map keyed by database name with values as a list of schemas within the database to exclude. (If set to null, no assets will be excluded.)
  14. You can then run the workflow using the run() method on the object you've created.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.

Direct extraction from PostgreSQL using basic auth
from pyatlan.client.atlan import AtlanClient
from pyatlan.cache.role_cache import RoleCache
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler( # (1)
        connection_name="production", # (2)
        admin_roles=[RoleCache.get_id_for_name("$admin")], # (3)
        admin_groups=None,
        admin_users=None,
        row_limit=10000, # (4)
        allow_query=True, # (5)
        allow_query_preview=True, # (6)
    )
    .direct(hostname="test.com", database="test-db") # (7)
    .basic_auth(  # (8)
        username="test-user",
        password="test-password",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (9)
    .exclude(assets=None)  # (10)
    .exclude_regex(regex=".*_TEST")  # (11)
    .source_level_filtering(enable=True)  # (12)
    .jdbc_internal_methods(enable=True)  # (13)
    .to_workflow() # (14)
)
response = client.workflow.run(crawler) # (15)
  1. Base configuration for a new PostgresCrawler crawler.
  2. You must provide a name for the connection that the PostgreSQL assets will exist within.
  3. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  4. You can specify a maximum number of rows that can be accessed for any asset in the connection.

  5. You can specify whether you want to allow queries to this connection (True, as in this example) or deny all query access to the connection (False).
  6. You can specify whether you want to allow data previews on this connection (True, as in this example) or deny all sample data previews to the connection (False).
  7. You must provide the hostname of your PostgreSQL instance and the database name for direct extraction.
  8. When using basic_auth(), you need to provide the following information:

    • username through which to access PostgreSQL.
    • password through which to access PostgreSQL.
  9. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to include. (If set to None, all databases and schemas will be crawled.)

  10. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to exclude. (If set to None, no assets will be excluded.)
  11. You can also optionally specify a regular expression for the crawler to ignore tables and views based on a naming convention.
  12. You can also optionally specify whether to enable (True) or disable (False) schema-level filtering on the source; when enabled, only the schemas selected in the include filter will be fetched.
  13. You can also optionally specify whether to enable (True) or disable (False) JDBC internal methods for data extraction.
  14. Now, you can convert the package into a Workflow object.
  15. Run the workflow by invoking the run() method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
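    For example, a minimal monitoring sketch is shown below. It assumes the workflow client's monitor() helper (available in recent pyatlan versions) and reuses the client and response objects created in the example above:

    import logging

    logger = logging.getLogger(__name__)

    # Block until the workflow run reaches a terminal state, logging progress
    # along the way; returns the final phase of the run.
    phase = client.workflow.monitor(workflow_response=response, logger=logger)
    logger.info("Crawl completed with phase: %s", phase)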

Create the workflow via UI only

We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.

IAM user authentication

Will create a new connection

This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.

Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).

2.2.0

To crawl assets directly from PostgreSQL using IAM user authentication:

Coming soon

PostgreSQL assets crawling using IAM user authentication
from pyatlan.client.atlan import AtlanClient
from pyatlan.cache.role_cache import RoleCache
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler( # (1)
        connection_name="production", # (2)
        admin_roles=[RoleCache.get_id_for_name("$admin")], # (3)
        admin_groups=None,
        admin_users=None,
        row_limit=10000, # (4)
        allow_query=True, # (5)
        allow_query_preview=True, # (6)
    )
    .direct(hostname="test.com", database="test-db") # (7)
    .iam_user_auth( # (8)
        username="test-user",
        access_key="test-access-key",
        secret_key="test-secret-key",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (9)
    .exclude(assets=None)  # (10)
    .exclude_regex(regex=".*_TEST")  # (11)
    .source_level_filtering(enable=True)  # (12)
    .jdbc_internal_methods(enable=True)  # (13)
    .to_workflow() # (14)
)
response = client.workflow.run(crawler) # (15)
  1. Base configuration for a new PostgresCrawler crawler.
  2. You must provide a name for the connection that the PostgreSQL assets will exist within.
  3. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  4. You can specify a maximum number of rows that can be accessed for any asset in the connection.

  5. You can specify whether you want to allow queries to this connection (True, as in this example) or deny all query access to the connection (False).
  6. You can specify whether you want to allow data previews on this connection (True, as in this example) or deny all sample data previews to the connection (False).
  7. You must provide the hostname of your PostgreSQL instance and the database name for direct extraction.
  8. When using iam_user_auth(), you need to provide the following information:

    • database username through which to access PostgreSQL.
    • access key through which to access PostgreSQL.
    • secret key through which to access PostgreSQL.
  9. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to include. (If set to None, all databases and schemas will be crawled.)

  10. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to exclude. (If set to None, no assets will be excluded.)
  11. You can also optionally specify a regular expression for the crawler to ignore tables and views based on a naming convention.
  12. You can also optionally specify whether to enable (True) or disable (False) schema-level filtering on the source; when enabled, only the schemas selected in the include filter will be fetched.
  13. You can also optionally specify whether to enable (True) or disable (False) JDBC internal methods for data extraction.
  14. Now, you can convert the package into a Workflow object.
  15. Run the workflow by invoking the run() method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.

Create the workflow via UI only

We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.

IAM role authentication

Will create a new connection

This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.

Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).

2.2.0

To crawl assets directly from PostgreSQL using IAM role authentication:

Coming soon

PostgreSQL assets crawling using IAM role authentication
from pyatlan.client.atlan import AtlanClient
from pyatlan.cache.role_cache import RoleCache
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler( # (1)
        connection_name="production", # (2)
        admin_roles=[RoleCache.get_id_for_name("$admin")], # (3)
        admin_groups=None,
        admin_users=None,
        row_limit=10000, # (4)
        allow_query=True, # (5)
        allow_query_preview=True, # (6)
    )
    .direct(hostname="test.com", database="test-db") # (7)
    .iam_role_auth( # (8)
        username="test-user",
        arn="test-role-arn",
        external_id="test-external-id",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (9)
    .exclude(assets=None)  # (10)
    .exclude_regex(regex=".*_TEST")  # (11)
    .source_level_filtering(enable=True)  # (12)
    .jdbc_internal_methods(enable=True)  # (13)
    .to_workflow() # (14)
)
response = client.workflow.run(crawler) # (15)
  1. Base configuration for a new PostgresCrawler crawler.
  2. You must provide a name for the connection that the PostgreSQL assets will exist within.
  3. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  4. You can specify a maximum number of rows that can be accessed for any asset in the connection.

  5. You can specify whether you want to allow queries to this connection (True, as in this example) or deny all query access to the connection (False).
  6. You can specify whether you want to allow data previews on this connection (True, as in this example) or deny all sample data previews to the connection (False).
  7. You must provide the hostname of your PostgreSQL instance and the database name for direct extraction.
  8. When using iam_role_auth(), you need to provide the following information:

    • database username through which to access PostgreSQL.
    • ARN of the AWS role.
    • AWS external ID.
  9. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to include. (If set to None, all databases and schemas will be crawled.)

  10. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to exclude. (If set to None, no assets will be excluded.)
  11. You can also optionally specify a regular expression for the crawler to ignore tables and views based on a naming convention.
  12. You can also optionally specify whether to enable (True) or disable (False) schema-level filtering on the source; when enabled, only the schemas selected in the include filter will be fetched.
  13. You can also optionally specify whether to enable (True) or disable (False) JDBC internal methods for data extraction.
  14. Now, you can convert the package into a Workflow object.
  15. Run the workflow by invoking the run() method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.

Create the workflow via UI only

We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.

Offline extraction

Will create a new connection

This should only be used to create the workflow the first time. Each time you run this method it will create a new connection and new assets within that connection — which could lead to duplicate assets if you run the workflow this way multiple times with the same settings.

Instead, when you want to re-crawl assets, re-run the existing workflow (see Re-run existing workflow below).

2.2.0

To crawl PostgreSQL assets from an S3 bucket:

Coming soon

Crawling PostgreSQL assets from a bucket
from pyatlan.client.atlan import AtlanClient
from pyatlan.cache.role_cache import RoleCache
from pyatlan.model.packages import PostgresCrawler

client = AtlanClient()

crawler = (
    PostgresCrawler( # (1)
        connection_name="production", # (2)
        admin_roles=[RoleCache.get_id_for_name("$admin")], # (3)
        admin_groups=None,
        admin_users=None,
        row_limit=10000, # (4)
        allow_query=True, # (5)
        allow_query_preview=True, # (6)
    )
    .s3( # (7)
        bucket_name="test-bucket",
        bucket_prefix="test-prefix",
        bucket_region="test-region",
    )
    .include(assets={"test-include": ["test-asset-1", "test-asset-2"]})  # (8)
    .exclude(assets=None)  # (9)
    .exclude_regex(regex=".*_TEST")  # (10)
    .source_level_filtering(enable=True)  # (11)
    .jdbc_internal_methods(enable=True)  # (12)
    .to_workflow() # (13)
)
response = client.workflow.run(crawler) # (14)
  1. Base configuration for a new PostgresCrawler crawler.
  2. You must provide a name for the connection that the PostgreSQL assets will exist within.
  3. You must specify at least one connection admin, either:

    • everyone in a role (in this example, all $admin users).
    • a list of groups (names) that will be connection admins.
    • a list of users (names) that will be connection admins.
  4. You can specify a maximum number of rows that can be accessed for any asset in the connection.

  5. You can specify whether you want to allow queries to this connection (True, as in this example) or deny all query access to the connection (False).
  6. You can specify whether you want to allow data previews on this connection (True, as in this example) or deny all sample data previews to the connection (False).
  7. When using s3(), you need to provide the following information:

    • name of the bucket/storage that contains the extracted metadata files.
    • prefix is everything after the bucket/storage name, including the path.
    • (Optional) name of the region if applicable.
  8. You can also optionally specify the set of assets to include in crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to include. (If set to None, all databases and schemas will be crawled.)

  9. You can also optionally specify the list of assets to exclude from crawling. For PostgreSQL assets, this should be specified as a dict keyed by database name with each value being a list of schemas to exclude. (If set to None, no assets will be excluded.)
  10. You can also optionally specify a regular expression for the crawler to ignore tables and views based on a naming convention.
  11. You can also optionally specify whether to enable (True) or disable (False) schema-level filtering on the source; when enabled, only the schemas selected in the include filter will be fetched.
  12. You can also optionally specify whether to enable (True) or disable (False) JDBC internal methods for data extraction.
  13. Now, you can convert the package into a Workflow object.
  14. Run the workflow by invoking the run() method on the workflow client, passing the created object.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.

Create the workflow via UI only

We recommend creating the workflow only via the UI. To rerun an existing workflow, see the steps below.

Re-run existing workflow

1.10.6

To re-run an existing workflow for PostgreSQL assets:

Re-run existing PostgreSQL workflow
List<WorkflowSearchResult> existing = WorkflowSearchRequest // (1)
            .findByType(PostgreSQLCrawler.PREFIX, 5); // (2)
// Determine which of the results is the PostgreSQL workflow you want to re-run...
WorkflowRunResponse response = existing.get(n).rerun(); // (3)
  1. You can search for existing workflows through the WorkflowSearchRequest class.
  2. You can find workflows by their type using the findByType() helper method and providing the prefix for one of the packages. In this example, we do so for the PostgreSQLCrawler. (You can also specify the maximum number of resulting workflows you want to retrieve as results.)
  3. Once you've found the workflow you want to re-run, you can simply call the rerun() helper method on the workflow search result. The WorkflowRunResponse is just a subtype of WorkflowResponse so has the same helper method to monitor progress of the workflow run.

    • Optionally, you can use the rerun(true) method for idempotency, to avoid re-running a workflow that is already in a running or pending state. This will return details of the already-running workflow if found; by default, idempotency is set to false.

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.

Re-run existing PostgreSQL workflow
from pyatlan.client.atlan import AtlanClient
from pyatlan.model.enums import WorkflowPackage

client = AtlanClient()

existing = client.workflow.find_by_type(  # (1)
  prefix=WorkflowPackage.POSTGRES, max_results=5
)

# Determine which PostgreSQL workflow (n)
# from the list of results you want to re-run.
response = client.workflow.rerun(existing[n]) # (2)
  1. You can find workflows by their type using the workflow client find_by_type() method and providing the prefix for one of the packages. In this example, we do so for the PostgreSQLCrawler. (You can also specify the maximum number of resulting workflows you want to retrieve as results.)
  2. Once you've found the workflow you want to re-run, you can simply call the workflow client rerun() method.

    • Optionally, you can use rerun(idempotent=True) to avoid re-running a workflow that is already in a running or pending state. This will return details of the already-running workflow if found; by default, idempotent is set to False. (A minimal sketch follows the note below.)

    Workflows run asynchronously

    Remember that workflows run asynchronously. See the packages and workflows introduction for details on how you can check the status and wait until the workflow has been completed.
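    As referenced above, an idempotent re-run is a one-line change, reusing the client and existing objects from the example above:

    # Re-run only if the workflow is not already running or pending;
    # otherwise this returns details of the in-flight run.
    response = client.workflow.rerun(existing[n], idempotent=True)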

Requires multiple steps through the raw REST API

  1. Find the existing workflow.
  2. Send through the resulting re-run request.
POST /api/service/workflows/indexsearch
{
  "from": 0,
  "size": 5,
  "query": {
    "bool": {
      "filter": [
        {
          "nested": {
            "path": "metadata",
            "query": {
              "prefix": {
                "metadata.name.keyword": {
                  "value": "atlan-postgres" // (1)
                }
              }
            }
          }
        }
      ]
    }
  },
  "sort": [
    {
      "metadata.creationTimestamp": {
        "nested": {
          "path": "metadata"
        },
        "order": "desc"
      }
    }
  ],
  "track_total_hits": true
}
  1. Searching by the atlan-postgres prefix will ensure you only find existing PostgreSQL assets workflows.

    Name of the workflow

    The name of the workflow will be nested within the _source.metadata.name property of the response object. (Remember since this is a search, there could be multiple results, so you may want to use the other details in each result to determine which workflow you really want.)

POST /api/service/workflows/submit
{
  "namespace": "default",
  "resourceKind": "WorkflowTemplate",
  "resourceName": "atlan-postgres-1684500411" // (1)
}
  1. Send the name of the workflow as the resourceName to rerun it.
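
For reference, the two raw REST calls above can be chained from a short script. The following is a minimal sketch rather than an official client: the tenant base URL, the bearer-token header, and the Elasticsearch-style "hits" wrapper around the search results are assumptions, while the request bodies are exactly the ones shown above.

Chaining the two REST calls (sketch)
import requests

BASE_URL = "https://tenant.atlan.com"              # hypothetical tenant URL
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical API token

# 1. Find the existing workflow (same request body as shown above).
search_body = {
    "from": 0,
    "size": 5,
    "query": {"bool": {"filter": [{"nested": {
        "path": "metadata",
        "query": {"prefix": {"metadata.name.keyword": {"value": "atlan-postgres"}}},
    }}]}},
    "sort": [{"metadata.creationTimestamp": {"nested": {"path": "metadata"}, "order": "desc"}}],
    "track_total_hits": True,
}
search = requests.post(
    f"{BASE_URL}/api/service/workflows/indexsearch", json=search_body, headers=HEADERS
)
search.raise_for_status()

# The workflow name is nested under _source.metadata.name of each result
# (assuming an Elasticsearch-style "hits" wrapper in the response).
workflow_name = search.json()["hits"]["hits"][0]["_source"]["metadata"]["name"]

# 2. Send through the resulting re-run request.
rerun = requests.post(
    f"{BASE_URL}/api/service/workflows/submit",
    json={
        "namespace": "default",
        "resourceKind": "WorkflowTemplate",
        "resourceName": workflow_name,
    },
    headers=HEADERS,
)
rerun.raise_for_status()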