Viewing docs for Databricks v1.90.0
published on Thursday, Mar 19, 2026 by Pulumi
Using getFeatureEngineeringKafkaConfig
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
TypeScript

function getFeatureEngineeringKafkaConfig(args: GetFeatureEngineeringKafkaConfigArgs, opts?: InvokeOptions): Promise<GetFeatureEngineeringKafkaConfigResult>
function getFeatureEngineeringKafkaConfigOutput(args: GetFeatureEngineeringKafkaConfigOutputArgs, opts?: InvokeOptions): Output<GetFeatureEngineeringKafkaConfigResult>

Python

def get_feature_engineering_kafka_config(name: Optional[str] = None,
                                         provider_config: Optional[GetFeatureEngineeringKafkaConfigProviderConfig] = None,
                                         opts: Optional[InvokeOptions] = None) -> GetFeatureEngineeringKafkaConfigResult
def get_feature_engineering_kafka_config_output(name: Optional[pulumi.Input[str]] = None,
                                                provider_config: Optional[pulumi.Input[GetFeatureEngineeringKafkaConfigProviderConfigArgs]] = None,
                                                opts: Optional[InvokeOptions] = None) -> Output[GetFeatureEngineeringKafkaConfigResult]

Go

func LookupFeatureEngineeringKafkaConfig(ctx *Context, args *LookupFeatureEngineeringKafkaConfigArgs, opts ...InvokeOption) (*LookupFeatureEngineeringKafkaConfigResult, error)
func LookupFeatureEngineeringKafkaConfigOutput(ctx *Context, args *LookupFeatureEngineeringKafkaConfigOutputArgs, opts ...InvokeOption) LookupFeatureEngineeringKafkaConfigResultOutput

Note: This function is named LookupFeatureEngineeringKafkaConfig in the Go SDK.

C#

public static class GetFeatureEngineeringKafkaConfig
{
    public static Task<GetFeatureEngineeringKafkaConfigResult> InvokeAsync(GetFeatureEngineeringKafkaConfigArgs args, InvokeOptions? opts = null)
    public static Output<GetFeatureEngineeringKafkaConfigResult> Invoke(GetFeatureEngineeringKafkaConfigInvokeArgs args, InvokeOptions? opts = null)
}

Java

public static CompletableFuture<GetFeatureEngineeringKafkaConfigResult> getFeatureEngineeringKafkaConfig(GetFeatureEngineeringKafkaConfigArgs args, InvokeOptions options)
public static Output<GetFeatureEngineeringKafkaConfigResult> getFeatureEngineeringKafkaConfig(GetFeatureEngineeringKafkaConfigArgs args, InvokeOptions options)

YAML

fn::invoke:
  function: databricks:index/getFeatureEngineeringKafkaConfig:getFeatureEngineeringKafkaConfig
  arguments:
    # arguments dictionary

The following arguments are supported:
C#

- Name string - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- ProviderConfig GetFeatureEngineeringKafkaConfigProviderConfig - Configure the provider for management through account provider.

Go

- Name string - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- ProviderConfig GetFeatureEngineeringKafkaConfigProviderConfig - Configure the provider for management through account provider.

Java

- name String - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- providerConfig GetFeatureEngineeringKafkaConfigProviderConfig - Configure the provider for management through account provider.

TypeScript

- name string - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- providerConfig GetFeatureEngineeringKafkaConfigProviderConfig - Configure the provider for management through account provider.

Python

- name str - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- provider_config GetFeatureEngineeringKafkaConfigProviderConfig - Configure the provider for management through account provider.

YAML

- name String - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- providerConfig Property Map - Configure the provider for management through account provider.
getFeatureEngineeringKafkaConfig Result
The following output properties are available:
C#

- AuthConfig GetFeatureEngineeringKafkaConfigAuthConfig - (AuthConfig) - Authentication configuration for connection to topics
- BackfillSource GetFeatureEngineeringKafkaConfigBackfillSource - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- BootstrapServers string - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- ExtraOptions Dictionary<string, string> - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- Id string - The provider-assigned unique ID for this managed resource.
- KeySchema GetFeatureEngineeringKafkaConfigKeySchema - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- Name string - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- SubscriptionMode GetFeatureEngineeringKafkaConfigSubscriptionMode - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- ValueSchema GetFeatureEngineeringKafkaConfigValueSchema - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- ProviderConfig GetFeatureEngineeringKafkaConfigProviderConfig

Go

- AuthConfig GetFeatureEngineeringKafkaConfigAuthConfig - (AuthConfig) - Authentication configuration for connection to topics
- BackfillSource GetFeatureEngineeringKafkaConfigBackfillSource - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- BootstrapServers string - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- ExtraOptions map[string]string - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- Id string - The provider-assigned unique ID for this managed resource.
- KeySchema GetFeatureEngineeringKafkaConfigKeySchema - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- Name string - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- SubscriptionMode GetFeatureEngineeringKafkaConfigSubscriptionMode - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- ValueSchema GetFeatureEngineeringKafkaConfigValueSchema - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- ProviderConfig GetFeatureEngineeringKafkaConfigProviderConfig

Java

- authConfig GetFeatureEngineeringKafkaConfigAuthConfig - (AuthConfig) - Authentication configuration for connection to topics
- backfillSource GetFeatureEngineeringKafkaConfigBackfillSource - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- bootstrapServers String - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- extraOptions Map<String,String> - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- id String - The provider-assigned unique ID for this managed resource.
- keySchema GetFeatureEngineeringKafkaConfigKeySchema - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- name String - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- subscriptionMode GetFeatureEngineeringKafkaConfigSubscriptionMode - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- valueSchema GetFeatureEngineeringKafkaConfigValueSchema - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- providerConfig GetFeatureEngineeringKafkaConfigProviderConfig

TypeScript

- authConfig GetFeatureEngineeringKafkaConfigAuthConfig - (AuthConfig) - Authentication configuration for connection to topics
- backfillSource GetFeatureEngineeringKafkaConfigBackfillSource - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- bootstrapServers string - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- extraOptions {[key: string]: string} - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- id string - The provider-assigned unique ID for this managed resource.
- keySchema GetFeatureEngineeringKafkaConfigKeySchema - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- name string - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- subscriptionMode GetFeatureEngineeringKafkaConfigSubscriptionMode - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- valueSchema GetFeatureEngineeringKafkaConfigValueSchema - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- providerConfig GetFeatureEngineeringKafkaConfigProviderConfig

Python

- auth_config GetFeatureEngineeringKafkaConfigAuthConfig - (AuthConfig) - Authentication configuration for connection to topics
- backfill_source GetFeatureEngineeringKafkaConfigBackfillSource - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- bootstrap_servers str - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- extra_options Mapping[str, str] - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- id str - The provider-assigned unique ID for this managed resource.
- key_schema GetFeatureEngineeringKafkaConfigKeySchema - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- name str - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- subscription_mode GetFeatureEngineeringKafkaConfigSubscriptionMode - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- value_schema GetFeatureEngineeringKafkaConfigValueSchema - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- provider_config GetFeatureEngineeringKafkaConfigProviderConfig

YAML

- authConfig Property Map - (AuthConfig) - Authentication configuration for connection to topics
- backfillSource Property Map - (BackfillSource) - A user-provided and managed source for backfilling data. Historical data is used when creating a training set from streaming features linked to this Kafka config. In the future, a separate table will be maintained by Databricks for forward filling data. The schema for this source must match exactly that of the key and value schemas specified for this Kafka config
- bootstrapServers String - (string) - A comma-separated list of host/port pairs pointing to the Kafka cluster
- extraOptions Map<String> - (object) - Catch-all for miscellaneous options. Keys should be source options or Kafka consumer options (kafka.*)
- id String - The provider-assigned unique ID for this managed resource.
- keySchema Property Map - (SchemaConfig) - Schema configuration for extracting message keys from topics. At least one of key_schema and value_schema must be provided
- name String - (string) - Name that uniquely identifies this Kafka config within the metastore. This will be the identifier used from the Feature object to reference these configs for a feature. Can be distinct from topic name
- subscriptionMode Property Map - (SubscriptionMode) - Options to configure which Kafka topics to pull data from
- valueSchema Property Map - (SchemaConfig) - Schema configuration for extracting message values from topics. At least one of key_schema and value_schema must be provided
- providerConfig Property Map
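As a rough illustration of how two of these output fields are shaped, the sketch below (plain Python with hypothetical broker and option values, not calls into the Pulumi SDK) splits a bootstrapServers string into host/port pairs and separates the kafka.*-prefixed consumer options in extraOptions from plain source options:

```python
# Hypothetical values matching the documented field formats; not real endpoints.
bootstrap_servers = "broker-1.example.com:9092,broker-2.example.com:9092"

# bootstrapServers is a comma-separated list of host/port pairs.
hosts = [pair.strip() for pair in bootstrap_servers.split(",")]

extra_options = {
    "kafka.session.timeout.ms": "30000",  # Kafka consumer option (kafka.* prefix)
    "startingOffsets": "earliest",        # source option (no prefix)
}

# Keys with the kafka.* prefix are consumer options; the rest are source options.
consumer_options = {k: v for k, v in extra_options.items() if k.startswith("kafka.")}
source_options = {k: v for k, v in extra_options.items() if not k.startswith("kafka.")}
```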
Supporting Types
GetFeatureEngineeringKafkaConfigAuthConfig
- UcServiceCredentialName string - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
- UcServiceCredentialName string - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
- ucServiceCredentialName String - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
- ucServiceCredentialName string - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
- uc_service_credential_name str - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
- ucServiceCredentialName String - (string) - Name of the Unity Catalog service credential. This value will be set under the option databricks.serviceCredential
GetFeatureEngineeringKafkaConfigBackfillSource
- DeltaTableSource GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
- DeltaTableSource GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
- deltaTableSource GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
- deltaTableSource GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
- delta_table_source GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
- deltaTableSource Property Map - (DeltaTableSource) - The Delta table source containing the historic data to backfill. Only the delta table name is used for backfill; the entity columns and timeseries column are ignored as they are defined by the associated KafkaSource
GetFeatureEngineeringKafkaConfigBackfillSourceDeltaTableSource
C#

- FullName string - (string) - The full three-part (catalog, schema, table) name of the Delta table
- DataframeSchema string - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- EntityColumns List<string> - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- FilterCondition string - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- TimeseriesColumn string - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- TransformationSql string - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe

Go

- FullName string - (string) - The full three-part (catalog, schema, table) name of the Delta table
- DataframeSchema string - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- EntityColumns []string - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- FilterCondition string - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- TimeseriesColumn string - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- TransformationSql string - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe

Java

- fullName String - (string) - The full three-part (catalog, schema, table) name of the Delta table
- dataframeSchema String - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- entityColumns List<String> - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- filterCondition String - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- timeseriesColumn String - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- transformationSql String - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe

TypeScript

- fullName string - (string) - The full three-part (catalog, schema, table) name of the Delta table
- dataframeSchema string - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- entityColumns string[] - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- filterCondition string - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- timeseriesColumn string - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- transformationSql string - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe

Python

- full_name str - (string) - The full three-part (catalog, schema, table) name of the Delta table
- dataframe_schema str - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- entity_columns Sequence[str] - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- filter_condition str - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- timeseries_column str - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- transformation_sql str - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe

YAML

- fullName String - (string) - The full three-part (catalog, schema, table) name of the Delta table
- dataframeSchema String - (string) - Schema of the resulting dataframe after transformations, in Spark StructType JSON format (from df.schema.json()). Required if transformation_sql is specified. Example: {"type":"struct","fields":[{"name":"col_a","type":"integer","nullable":true,"metadata":{}},{"name":"col_c","type":"integer","nullable":true,"metadata":{}}]}
- entityColumns List<String> - (list of string, deprecated) - Deprecated: Use Feature.entity instead. Kept for backwards compatibility. The entity columns of the Delta table
- filterCondition String - (string) - Single WHERE clause to filter the delta table before applying transformations. Will be row-wise evaluated, so should only include conditionals and projections
- timeseriesColumn String - (string, deprecated) - Deprecated: Use Feature.timeseries_column instead. Kept for backwards compatibility. The timeseries column of the Delta table
- transformationSql String - (string) - A single SQL SELECT expression applied after filter_condition. Should contain all the columns needed (e.g. "SELECT *, col_a + col_b AS col_c FROM x.y.z WHERE col_a > 0" would have transformation_sql "*, col_a + col_b AS col_c"). If transformation_sql is not provided, all columns of the delta table are present in the DataSource dataframe
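The dataframeSchema field expects the exact JSON string that Spark's df.schema.json() produces. The sketch below builds an equivalent string with only the standard-library json module (the col_a/col_c field names mirror the example in the description above; no Spark installation is assumed):

```python
import json

# Hand-build the Spark StructType JSON that df.schema.json() would emit for a
# two-column integer dataframe; field names follow the docs' example.
def struct_field(name, dtype):
    return {"name": name, "type": dtype, "nullable": True, "metadata": {}}

dataframe_schema = json.dumps({
    "type": "struct",
    "fields": [struct_field("col_a", "integer"), struct_field("col_c", "integer")],
})

# Round-trip to confirm the string is valid JSON with the expected shape.
parsed = json.loads(dataframe_schema)
field_names = [f["name"] for f in parsed["fields"]]
```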
GetFeatureEngineeringKafkaConfigKeySchema
- JsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- JsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema String - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- json_schema str - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema String - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
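For a sense of what the jsonSchema string looks like, here is a minimal IETF JSON Schema for a message key, serialized the way this field expects (the user_id property is purely illustrative, not taken from any real topic):

```python
import json

# A minimal JSON Schema (https://json-schema.org/) describing a message key,
# stored as a string as the json_schema field expects. Property names are
# hypothetical.
json_schema = json.dumps({
    "type": "object",
    "properties": {"user_id": {"type": "string"}},
    "required": ["user_id"],
})

# The field is a plain string, so it must parse back to the schema document.
schema = json.loads(json_schema)
```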
GetFeatureEngineeringKafkaConfigProviderConfig
- WorkspaceId string - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
- WorkspaceId string - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
- workspaceId String - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
- workspaceId string - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
- workspace_id str - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
- workspaceId String - Workspace ID which the resource belongs to. This workspace must be part of the account which the provider is configured with.
GetFeatureEngineeringKafkaConfigSubscriptionMode
- Assign string - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- Subscribe string - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- SubscribePattern string - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
- Assign string - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- Subscribe string - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- SubscribePattern string - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
- assign String - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- subscribe String - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- subscribePattern String - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
- assign string - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- subscribe string - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- subscribePattern string - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
- assign str - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- subscribe str - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- subscribe_pattern str - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
- assign String - (string) - A JSON string that contains the specific topic-partitions to consume from. For example, for '{"topicA":[0,1],"topicB":[2,4]}', topicA's 0th and 1st partitions will be consumed from
- subscribe String - (string) - A comma-separated list of Kafka topics to read from. For example, 'topicA,topicB,topicC'
- subscribePattern String - (string) - A regular expression matching topics to subscribe to. For example, 'topic.*' will subscribe to all topics starting with 'topic'
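The three subscription fields correspond to Kafka's standard subscription styles, and their documented example values can be sketched in plain Python (stdlib only, no Kafka client assumed) to show how each string is interpreted:

```python
import json
import re

# The documented example values for each subscription style.
assign = '{"topicA":[0,1],"topicB":[2,4]}'   # JSON map of topic -> partitions
subscribe = "topicA,topicB,topicC"           # comma-separated topic list
subscribe_pattern = "topic.*"                # regex matched against topic names

partitions = json.loads(assign)              # specific topic-partitions to consume
topics = subscribe.split(",")                # explicit topic list
matches = [t for t in topics if re.fullmatch(subscribe_pattern, t)]
```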
GetFeatureEngineeringKafkaConfigValueSchema
- JsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- JsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema String - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema string - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- json_schema str - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
- jsonSchema String - (string) - Schema of the JSON object in standard IETF JSON schema format (https://json-schema.org/)
Package Details
- Repository
- databricks pulumi/pulumi-databricks
- License
- Apache-2.0
- Notes
This Pulumi package is based on the databricks Terraform Provider.